Search Results

Search found 94339 results on 3774 pages for 'system data'.


  • Before data is entered, is there a way to make a grouped table graphic placeholder?

    - by Matt Winters
    I have a grouped table with 3 sections, each section with a title. The 1st and 3rd sections always have only 1 row of information, so before any user data is entered, I just put some words like "Enter Data Here..." as placeholder text. This text is edited (replaced) by the user with their own actual data. No problem. The 2nd section, however, will contain several rows of information entered by the user, and I'd prefer not to enter placeholder data in row 0, having the user edit the first row of data and then add subsequent rows. If numberOfRowsInSection is set to 0, the title for the 3rd section comes close to the title for the 2nd section and it looks ugly. The best that I could come up with, and I don't know how to do it, is to have a fake graphic placeholder on the striped background (between the 2nd and 3rd titles) that looks like a single row in the 2nd section, put "Enter Data Here..." text in the graphic, and then have the first row of actual data entered and all subsequent rows cover it up. Can anyone tell me how to do this, or offer a better suggestion? Thanks.


  • C# .Net Serial DataReceived Event response too slow for high-rate data.

    - by Matthew
    Hi, I have set up a SerialDataReceivedEventHandler with a forms-based program in VS2008 Express. My serial port is set up as follows: 115200, 8N1, DTR and RTS enabled, ReceivedBytesThreshold = 1. I have a device I am interfacing with over Bluetooth, USB to serial. HyperTerminal receives the data just fine at any data rate. The data is sent regularly in 22-byte-long packets. This device has an adjustable rate at which data is sent. At low data rates, 10-20Hz, the code below works great, no problems. However, when I increase the data rate past 25Hz, multiple packets start arriving in one call. What I mean by this is that there should be an event trigger for every incoming packet. With higher output rates, I have tested the buffer size (the BytesToRead property) immediately when the event is called, and there are multiple packets in the buffer by then. I think that the event fires slowly and by the time it reaches the code, more packets have hit the buffer. One test I do is to see how many times the event is triggered per second. At 10Hz, I get 10 event triggers, awesome. At 100Hz, I get something like 40 event triggers, not good. My goal for data rate is: 100Hz is acceptable, 200Hz preferred, and 300Hz optimum. This should work because even at 300Hz, that is only 52800bps, less than half of the set 115200 baud rate. Anything I am overlooking?

        public Form1()
        {
            InitializeComponent();
            serialPort1.DataReceived += new SerialDataReceivedEventHandler(serialPort1_DataReceived);
        }

        private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            this.Invoke(new EventHandler(Display_Results));
        }

        private void Display_Results(object s, EventArgs e)
        {
            // Note: the original read "serial_Port1.BytesToRead", which doesn't compile; "serialPort1" is intended.
            serialPort1.Read(IMU, 0, serialPort1.BytesToRead);
        }
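    A hedged sketch of one common fix (not the asker's code): treat DataReceived as "bytes arrived" rather than "one packet arrived" - drain everything available on the event thread, accumulate it, and slice off complete 22-byte frames. Calling Invoke to reach the UI thread on every event is also a likely bottleneck at high rates. The rxBuffer field, PacketLength constant, and ProcessPacket handler below are illustrative assumptions, living in the same form class:

        private readonly Queue<byte> rxBuffer = new Queue<byte>();
        private const int PacketLength = 22; // fixed-length packets, per the question

        private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // Drain the driver buffer on the event thread; don't Invoke here.
            // (Assumes DataReceived calls don't overlap, which holds in practice.)
            int count = serialPort1.BytesToRead;
            byte[] chunk = new byte[count];
            serialPort1.Read(chunk, 0, count);
            foreach (byte b in chunk)
                rxBuffer.Enqueue(b);

            // Hand off every complete packet that has accumulated.
            while (rxBuffer.Count >= PacketLength)
            {
                byte[] packet = new byte[PacketLength];
                for (int i = 0; i < PacketLength; i++)
                    packet[i] = rxBuffer.Dequeue();
                ProcessPacket(packet); // hypothetical handler; marshal results to the UI only if needed
            }
        }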


  • How to explain to someone that a data structure should not draw itself, explaining separation of concerns

    - by leeand00
    I have another programmer to whom I'm trying to explain why it is that a UI component should not also be a data structure. For instance, say that you get a data structure that contains a record set from the "database", and you wish to display that record set in a UI component within your application. According to this programmer (who will remain nameless, he's young and I'm teaching him...), we should subclass the data structure into a class that will draw the UI component within our application! And thus, according to this logic, the record set should manage the drawing of the UI. *Head desk* I know that asking a record set to draw itself is wrong because, if you wish to render the same data structure in more than one type of component in your UI, you are going to have a real mess on your hands: you'll need to extend yet another class for each and every UI component that you render from the base class of your record set. I am well aware of the "cleanliness" of the MVC pattern (and by that what I really mean is you don't confuse your data (the Model) with your UI (the View) or the actions that take place on the data (the Controller, more or less... okay, not really, the API should really handle that... and the Controller should just make as few calls to it as it can, telling it which view to render)). But it's certainly a lot cleaner than using data structures to render UI components! Is there any other advice I could send his way other than the example above? I understand that when you first learn OOP you go through "a stage" where you just want to extend everything, followed by a stage when you think that Design Patterns are the solution to every single problem... which isn't entirely correct either... thanks, Jeff. Is there a way that I can gently nudge this kid in the right direction? Do you have any more examples that might help explain my point to him?
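    A minimal C# sketch of the distinction being argued for here (all types are illustrative): the record stays a dumb data holder, and each UI concern gets its own renderer, so adding a new view never touches the data class.

        using System;
        using System.Collections.Generic;

        // The data structure knows nothing about how it is displayed.
        public class CustomerRecord
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Rendering lives behind its own abstraction...
        public interface IRecordView
        {
            void Render(IEnumerable<CustomerRecord> records);
        }

        // ...so each component renders the same records its own way.
        public class GridView : IRecordView
        {
            public void Render(IEnumerable<CustomerRecord> records)
            {
                foreach (var r in records)
                    Console.WriteLine("| {0,5} | {1,-20} |", r.Id, r.Name);
            }
        }

        public class BulletListView : IRecordView
        {
            public void Render(IEnumerable<CustomerRecord> records)
            {
                foreach (var r in records)
                    Console.WriteLine("- {0} (#{1})", r.Name, r.Id);
            }
        }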


  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to these numbers (involving regex matching, pattern matching against data from many tables in the database, and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.
    - The Data Layer would access the database.
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
    - The GUI Layer would be a means of communication between the Logic Layer and the user.
    Now, thinking about this design, I can see that most of the business rules could be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution.
    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - Process all data at once, possibly utilizing parallelism and an optimal execution plan.
    Cons:
    - Scattering of the business logic: some part is here, some part is there.
    - Questionable design solution; may encounter unknown problems.
    - Difficult to implement a progress indicator for the processing task.
    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
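    For readers who haven't written one, a minimal SQL CLR stored procedure looks roughly like this (a sketch; the class and method names are illustrative). The point is that full .NET, including Regex, is available where T-SQL is clumsy:

        using System.Data.SqlTypes;
        using System.Text.RegularExpressions;
        using Microsoft.SqlServer.Server;

        public partial class StoredProcedures
        {
            // Compiled into an assembly, registered with CREATE ASSEMBLY,
            // then exposed via CREATE PROCEDURE ... EXTERNAL NAME.
            [SqlProcedure]
            public static void ExtractFirstNumber(SqlString line)
            {
                Match m = Regex.Match(line.Value, @"\d+");
                SqlContext.Pipe.Send(m.Success ? m.Value : "no match");
            }
        }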


  • Ruby on Rails 2.3.5: Populating my prod and devel database with data (migration or fixture?)

    - by randombits
    I need to populate my production database app with data in particular tables. This is before anyone ever even touches the application. This data would also be required in development mode, as it's required for testing against. Fixtures are normally the way to go for testing data, but what's the "best practice" for Ruby on Rails to ship this data to the live database upon db creation as well? Ultimately this is a two-part question, I suppose.
    1) What's the best way to load test data into my database for development? This will be roughly 1,000 items. Is it through a migration or through fixtures? The reason this has a different answer than the question below is that in development there are certain fields in the tables that I'd like to make random. In production, these fields would all start with the same value of 0.
    2) What's the best way to bootstrap a production db with live data I need in it? Is this also through a migration or a fixture? I think the answer is to seed as described here: http://lptf.blogspot.com/2009/09/seed-data-in-rails-234.html but I need a way to seed for development and seed for production. Also, why bother using fixtures if seeding is available? When does one seed and when does one use fixtures?


  • How to solve the problem of not being informed of successful payments by the 3rd party system used by the website

    - by user68759
    I have a subscription-based website that interacts with a 3rd party system to handle the payments. The steps to process a new subscriber registration are as follows:
    1. The subscriber enters his/her details in the subscription form and clicks on the submit button.
    2. Assuming the details specified are valid, a new record is created in the database to store these details.
    3. The subscriber is then redirected to the website of the 3rd party system (similar to PayPal) to process the payment.
    4. Once the payment is successful, the 3rd party website redirects the subscriber back to our website. At this time, I know that the payment was successful, so the record in the database is updated to indicate that payment has been made successfully.
    A problem that I have found occurring quite often is that if a subscriber pays but does not complete the process correctly (e.g. uses the browser's back button, closes the window), his/her record in the database doesn't get updated to reflect this. Accordingly, I don't know if s/he has paid just by looking at the record, and I need to wait for the report from the 3rd party system to find this out. How do you solve this problem?
    PS. One of the main reasons to store their details in the database before the payment process is done is so they can come back to complete the payment without re-entering their details again. For example, when their credit card was rejected by the 3rd party system and they need to sort this out with their financial institution, which may take a while.
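    A small hedged sketch of the bookkeeping the question implies (all names hypothetical): give the record an explicit pending state when it is created in step 2 and flip it only in step 4. Records left pending longer than the gateway's session lifetime are then exactly the ones to reconcile against the 3rd party's report.

        using System;

        public enum PaymentStatus { PendingPayment, Paid }

        public class Subscription
        {
            public int Id { get; set; }
            public PaymentStatus Status { get; set; } = PaymentStatus.PendingPayment; // set in step 2
            public DateTime CreatedUtc { get; set; } = DateTime.UtcNow;

            // Called from the return-from-gateway handler in step 4.
            public void MarkPaid() => Status = PaymentStatus.Paid;

            // Anything still pending after this window needs reconciling
            // against the gateway's payment report.
            public bool NeedsReconciliation(TimeSpan gatewaySessionLifetime) =>
                Status == PaymentStatus.PendingPayment &&
                DateTime.UtcNow - CreatedUtc > gatewaySessionLifetime;
        }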


  • Terminate function on System.in .. possible?

    - by Ronald
    I am currently working on a project where I have to make an agent that interacts with a server. Every 50ms, the server will receive the last thing I outputted to System.out and send me a new set of lines as a 'state' through System.in to analyze, and I send my next message to System.out. Also, if the server receives multiple outputs from me, it only regards the most recent one. As for my question: my program originally constructed a tree and then analyzed each leaf node to see which would be optimal, and then waited around for the next input, but I can recursively do a deeper tree search that would make my output 'better' (and again and again, to keep returning a better result). Using this, and the fact that if the server receives multiple outputs it only takes the most recent one, I could do each level, print my result, and start the next level. But here comes my problem... I can't be stuck in some complex algorithm while I am supposed to be receiving the next input, as I will then miss it. So I was wondering if there is a way to cancel anything else I am doing when I receive something via System.in, then go back to the beginning of the function and start the search again with the new set of input (and rinse and repeat...). I hope this all makes sense. Thank ye all


  • Adding data into tables that include foreign keys, but which new row was created (Entity Framework)?

    - by programmerist
    Adding data into the Kartlar table (RehberID, KampanyaID, BrimID) is OK. But which Kart ID was created? I need to learn which ID was created after adding the data (RehberID, KampanyaID, BrimID) into Kartlar.

        public static List<Kartlar> SaveKartlar(int RehberID, int KampanyaID, int BrimID, string Notlar)
        {
            using (GenSatisModuleEntities genSatisCtx = new GenSatisModuleEntities())
            {
                Kartlar kartlar = new Kartlar();
                kartlar.RehberReference.EntityKey = new System.Data.EntityKey("GenSatisModuleEntities.Rehber", "ID", RehberID);
                kartlar.KampanyaReference.EntityKey = new System.Data.EntityKey("GenSatisModuleEntities.Kampanya", "ID", KampanyaID);
                kartlar.BirimReference.EntityKey = new System.Data.EntityKey("GenSatisModuleEntities.Birim", "ID", BrimID);
                kartlar.Notlar = Notlar;
                genSatisCtx.AddToKartlar(kartlar);
                genSatisCtx.SaveChanges();
                List<Kartlar> kartAddedPatient;
                kartAddedPatient = (from k in genSatisCtx.Kartlar
                                    where k.RehberReference.EntityKey == RehberID
                                       && k.KampanyaReference.EntityKey == KampanyaID
                                       && k.BirimReference.EntityKey == BrimID
                                    select k); // the original was missing this semicolon
                return kartAddedPatient;
            }
        }

    How can I do that? I want to get the data from Kartlar that I just added.
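    For what it's worth, a hedged sketch (assuming Kartlar.ID is a store-generated identity column): with the Entity Framework ObjectContext, SaveChanges() writes the generated key back onto the entity, so the re-query should not be necessary at all.

        genSatisCtx.AddToKartlar(kartlar);
        genSatisCtx.SaveChanges();

        // EF refreshes store-generated values after the INSERT,
        // so the new identity is already on the object:
        int newKartId = kartlar.ID;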


  • C++/CLI HTTP Proxy problems...

    - by darkantimatter
    Hi, I'm trying (very hard) to make a small HTTP proxy server which I can use to save all communications to a file. Seeing as I don't really have any experience in the area, I used a class from codeproject.com and some associated code to get started (it was made in the old CLI syntax, so I converted it). I couldn't get it working, so I added lots more code to make it work (threads etc.), and now it sort of works. Basically, it receives something from a client (I just configured Mozilla Firefox to route its connections through this proxy) and then routes it to google.com. After it sends Mozilla's data to Google, it receives a response and sends that to Mozilla. This works fine, but then the proxy fails to receive any data from Mozilla. It just loops in the Sleep(50) section. Anyway, here's the code:

    ProxyTest.cpp:

        #include "stdafx.h"
        #include "windows.h"
        #include "CHTTPProxy.h"

        public ref class ClientThread
        {
        public:
            System::Net::Sockets::TcpClient ^ pClient;
            CHttpProxy ^ pProxy;
            System::Int32 ^ pRecieveBufferSize;
            System::Threading::Thread ^ Thread;

            ClientThread(System::Net::Sockets::TcpClient ^ sClient, CHttpProxy ^ sProxy, System::Int32 ^ sRecieveBufferSize)
            {
                pClient = sClient;
                pProxy = sProxy;
                pRecieveBufferSize = sRecieveBufferSize;
            };

            void StartReading()
            {
                Thread = gcnew System::Threading::Thread(gcnew System::Threading::ThreadStart(this, &ClientThread::ThreadEntryPoint));
                Thread->Start();
            };

            void ThreadEntryPoint()
            {
                char * bytess;
                bytess = new char[(int)pRecieveBufferSize];
                memset(bytess, 0, (int)pRecieveBufferSize);
                array<unsigned char> ^ bytes = gcnew array<unsigned char>((int)pRecieveBufferSize);
                array<unsigned char> ^ sendbytes;
                do
                {
                    if (pClient->GetStream()->DataAvailable)
                    {
                        try
                        {
                            do
                            {
                                Sleep(100); // Let's wait for the whole packet to get cached (if it even does...)
                                unsigned int k = pClient->GetStream()->Read(bytes, 0, (int)pRecieveBufferSize); // Read it
                                for (unsigned int i = 0; i < (int)pRecieveBufferSize; i++)
                                    bytess[i] = bytes[i];
                                Console::WriteLine("Packet Received:\n" + gcnew System::String(bytess));
                                pProxy->SendToServer(bytes, pClient->GetStream()); // Now send it to google!
                                pClient->GetStream()->Flush();
                            } while (pClient->GetStream()->DataAvailable);
                        }
                        catch (Exception ^ e)
                        {
                            break;
                        }
                    }
                    else
                    {
                        Sleep(50); // It just loops here because it thinks Mozilla isn't sending anything
                        if (!(pClient->Connected))
                            break;
                    };
                } while (pClient->GetStream()->CanRead);
                delete [] bytess;
                pClient->Close();
            };
        };

        int main(array<System::String ^> ^args)
        {
            System::Collections::Generic::Stack<ClientThread ^> ^ Clients = gcnew System::Collections::Generic::Stack<ClientThread ^>();
            System::Net::Sockets::TcpListener ^ pTcpListener = gcnew System::Net::Sockets::TcpListener(8080);
            pTcpListener->Start();
            System::Net::Sockets::TcpClient ^ pTcpClient;
            while (1)
            {
                pTcpClient = pTcpListener->AcceptTcpClient(); // Wait for client
                ClientThread ^ Client = gcnew ClientThread(pTcpClient, gcnew CHttpProxy("www.google.com.au", 80), pTcpClient->ReceiveBufferSize); // Make a new object for this client
                Client->StartReading(); // Start the thread
                Clients->Push(Client); // Add it to the list
            };
            pTcpListener->Stop();
            return 0;
        }

    CHTTPProxy.h, from http://www.codeproject.com/KB/IP/howtoproxy.aspx with a lot of modifications:

        // THIS FILE IS FROM http://www.codeproject.com/KB/IP/howtoproxy.aspx. I DID NOT MAKE THIS! BUT I HAVE MADE SEVERAL MODIFICATIONS!
        #using <mscorlib.dll>
        #using <SYSTEM.DLL>
        using namespace System;
        using System::Net::Sockets::TcpClient;
        using System::String;
        using System::Exception;
        using System::Net::Sockets::NetworkStream;
        #include <stdio.h>

        ref class CHttpProxy
        {
        public:
            CHttpProxy(System::String ^ szHost, int port);
            System::String ^ m_host;
            int m_port;
            void SendToServer(array<unsigned char> ^ Packet, System::Net::Sockets::NetworkStream ^ sendstr);
        };

        CHttpProxy::CHttpProxy(System::String ^ szHost, int port)
        {
            m_host = gcnew System::String(szHost);
            m_port = port;
        }

        void CHttpProxy::SendToServer(array<unsigned char> ^ Packet, System::Net::Sockets::NetworkStream ^ sendstr)
        {
            TcpClient ^ tcpclnt = gcnew TcpClient();
            try
            {
                tcpclnt->Connect(m_host, m_port);
            }
            catch (Exception ^ e)
            {
                Console::WriteLine(e->ToString());
                return;
            }
            // Send it
            if (tcpclnt)
            {
                NetworkStream ^ networkStream;
                networkStream = tcpclnt->GetStream();
                int size = Packet->Length;
                networkStream->Write(Packet, 0, size);
                array<unsigned char> ^ bytes = gcnew array<unsigned char>(tcpclnt->ReceiveBufferSize);
                char * bytess = new char[tcpclnt->ReceiveBufferSize];
                Sleep(500); // Wait for response
                do
                {
                    unsigned int k = networkStream->Read(bytes, 0, (int)tcpclnt->ReceiveBufferSize); // Read from google
                    for (unsigned int i = 0; i < k; i++)
                    {
                        bytess[i] = bytes[i];
                        if (bytess[i] == 0) bytess[i] = ' '; // Don't terminate the string
                        if (bytess[i] < 8) bytess[i] = ' '; // Something's making the computer beep, and it's not 7?!?!
                    };
                    Console::WriteLine("\n\nAbove packet sent to google. Google Packet Received:\n" + gcnew System::String(bytess));
                    sendstr->Write(bytes, 0, k); // Send it to mozilla
                    Console::WriteLine("\n\nAbove packet sent to client...");
                    // Sleep(1000);
                } while (networkStream->DataAvailable);
                delete [] bytess;
            }
            return;
        }

    Any help would be much appreciated, I've tried for hours.


  • Update database once every 30 minutes

    - by Ravi
    Hi, I am working on a C#.NET Windows application with SQL Server 2005. In this project I am using an ADO.NET data service for database maintenance. I am working in the industrial automation domain; here, data keeps being read for more than 8 hours. After reading data from the device, I have to periodically update the database based on a trigger. For example: reading starts at 9.00AM and the trigger fires at 9.50AM. Once the trigger fires, the last 30 minutes of data (9.20AM to 9.50AM) are stored into the database. After the trigger fires, data keeps being read from the device and stored into the database. At 10.00AM the trigger goes off again, at which time storing data to the database has to stop. The trigger fires again at 11.00AM. Once it fires, the last 30 minutes of data (10.30AM to 11.00AM) are stored into the database, and after that data again keeps being read and stored. Between triggers, data keeps being stored locally. Here is what I don't know: until the trigger fires, where and how do I maintain the data temporarily, and after the trigger fires, how do I bring the last 30 minutes of data in and store it into the database? I don't know how to achieve this. It would be great if anyone could suggest an idea. Thanks
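    One possible shape for the temporary store (a hedged sketch, not the asker's code; Reading and its double value are illustrative): keep each sample with its timestamp in an in-memory queue, trim anything older than 30 minutes as new samples arrive, and dump the whole queue to the database when the trigger fires.

        using System;
        using System.Collections.Generic;

        public class Reading
        {
            public DateTime Time;
            public double Value;
        }

        public class RollingBuffer
        {
            private readonly Queue<Reading> buffer = new Queue<Reading>();
            private static readonly TimeSpan Window = TimeSpan.FromMinutes(30);

            public void Add(double value)
            {
                buffer.Enqueue(new Reading { Time = DateTime.Now, Value = value });
                // Discard samples that have aged past the 30-minute window.
                while (buffer.Count > 0 && DateTime.Now - buffer.Peek().Time > Window)
                    buffer.Dequeue();
            }

            // On trigger: everything left in the queue is exactly the last window.
            public Reading[] DrainForDatabase()
            {
                Reading[] snapshot = buffer.ToArray();
                buffer.Clear();
                return snapshot;
            }
        }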


  • C#: Using one Data Table in order to fill 2 different comboboxes?

    - by odiseh
    Hi all. I have 2 combo boxes on a form (form1) called combobox1 and combobox2. Each combo box should be filled with data stored in 2 different tables in SQL Server 2005, table1 and table2. I mean:

        combobox1 -- table1
        combobox2 -- table2

    I fill a data table with the proper data and then bind the combo boxes to it separately. My problem is: after filling the 2 combos, both of them have equal data, taken from table2. This is my code:

        DataTable tb1 = new DataTable();

        // Filling tb1 with data got from table1
        combobox1.Items.Clear();
        combobox1.DataSource = tb1;
        combobox1.DisplayMember = "Name";
        combobox1.ValueMember = "ID";
        combobox1.SelectedIndex = -1;

        // Filling tb1 with data got from table2
        combobox2.Items.Clear();
        combobox2.DataSource = tb1;
        combobox2.DisplayMember = "Name";
        combobox2.ValueMember = "ID";
        combobox2.SelectedIndex = -1;

    What's wrong? It seems that if I get 2 different data tables (tb1 and tb2), everything will be all right. Any suggestions please. Thank you
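    The asker's own suspicion is the usual answer: both combo boxes end up bound to the same DataTable object, so refilling it changes what both display. A hedged sketch of the fix (the fill queries are omitted):

        DataTable tb1 = new DataTable();
        DataTable tb2 = new DataTable();
        // Fill tb1 from table1 and tb2 from table2 here, then bind
        // each combo box to its own table so they no longer share data:

        combobox1.DataSource = tb1;
        combobox1.DisplayMember = "Name";
        combobox1.ValueMember = "ID";

        combobox2.DataSource = tb2;
        combobox2.DisplayMember = "Name";
        combobox2.ValueMember = "ID";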


  • AngularJS: Using Shared Service(with $resource) to share data between controllers, but how to define callback functions?

    - by shaunlim
    Note: I also posted this question on the AngularJS mailing list here: https://groups.google.com/forum/#!topic/angular/UC8_pZsdn2U

    Hi all, I'm building my first AngularJS app and am not very familiar with JavaScript to begin with, so any guidance will be much appreciated :) My app has two controllers, ClientController and CountryController. In CountryController, I'm retrieving a list of countries from a CountryService that uses the $resource object. This works fine, but I want to be able to share the list of countries with the ClientController. After some research, I read that I should use the CountryService to store the data and inject that service into both controllers. This was the code I had before:

    CountryService:

        services.factory('CountryService', function($resource) {
            return $resource('http://localhost:port/restwrapper/client.json', {port: ':8080'});
        });

    CountryController:

        // Get list of countries
        // inherently async query using deferred promise
        $scope.countries = CountryService.query(function(result) {
            // preselected first entry as default
            $scope.selected.country = $scope.countries[0];
        });

    And after my changes, they look like this:

    CountryService:

        services.factory('CountryService', function($resource) {
            var countryService = {};
            var data;
            var resource = $resource('http://localhost:port/restwrapper/country.json', {port: ':8080'});

            var countries = function() {
                data = resource.query();
                return data;
            }

            return {
                getCountries: function() {
                    if (data) {
                        console.log("returning cached data");
                        return data;
                    } else {
                        console.log("getting countries from server");
                        return countries();
                    }
                }
            };
        });

    CountryController:

        $scope.countries = CountryService.getCountries(function(result) {
            console.log("i need a callback function here...");
        });

    The problem is that I used to be able to use the callback function in $resource.query() to preselect a default selection, but now that I've moved the query() call into my CountryService, I seem to have lost that ability. What's the best way to go about solving this problem? Thanks for your help, Shaun


  • Is there a way to prevent SQL Server silently truncating data in local variables and stored procedure parameters?

    - by Luke Woodward
    I recently encountered an issue while porting an app to SQL Server. It turned out that this issue was caused by a stored procedure parameter being declared too short for the data being passed to it: the parameter was declared as VARCHAR(100) but in one case was being passed more than 100 characters of data. What surprised me was that SQL Server didn't report any errors or warnings -- it just silently truncated the data to 100 characters. The following SQLCMD session demonstrates this:

        1 create procedure WhereHasMyDataGone (@data varchar(5)) as
        2 begin
        3     print 'Your data is ''' + @data + '''.';
        4 end;
        5 go

        1 exec WhereHasMyDataGone '123456789';
        2 go
        Your data is '12345'.

    Local variables also exhibit the same behaviour:

        1 declare @s varchar(5) = '123456789';
        2 print @s;
        3 go
        12345

    Is there an option I can enable to have SQL Server report errors (or at least warnings) in such situations? Or should I just declare all local variables and stored procedure parameters as VARCHAR(MAX) or NVARCHAR(MAX)?
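    Pending a server-side answer, one defensive option from the calling application (a hedged sketch, not a SQL Server setting) is to validate lengths before the value ever reaches the parameter, so oversized data fails loudly in the app instead of being silently truncated:

        using System;

        static class ParamGuard
        {
            // declaredLength must be kept in sync with the procedure's varchar(n) declaration.
            public static void EnsureFits(string value, int declaredLength, string paramName)
            {
                if (value != null && value.Length > declaredLength)
                    throw new ArgumentException(string.Format(
                        "Value is {0} chars but @{1} is declared varchar({2}).",
                        value.Length, paramName, declaredLength));
            }
        }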


  • What techniques do you use for emitting data from the server that will solely be used in client side scripts?

    - by chuck
    Hi all, I never found an optimal solution for this problem, so I am hoping that some of you out there have a few solutions. Let's say I need to render out a list of checkboxes, and each checkbox has a set of additional data that goes with it. This data will be used purely in the context of JavaScript and jQuery. My usual strategy is to render this data in hidden fields that are grouped in the same container as the checkbox. My rendered HTML will look something like this:

        <div>
            <input type="checkbox" />
            <input type="hidden" class="genreId" />
            <input type="hidden" class="titleId" />
        </div>

    My only problem with this is that the data in the hidden fields gets posted to the server when the form is submitted. For small amounts of data, this is fine. However, I frequently work with large datasets, and a large amount of data is needlessly transferred.

    UPDATE: Before submitting this post, I just saw that I can add a "disabled" attribute to my input element to suppress the submission of data. Is this pretty much the best approach that I can take? Thanks


  • Code producing System.NullReferenceException error for Membership.GetUser(). This is VB.Net (ASP.Net 4)

    - by Derrek
    I have a Default.aspx page that is not static. I have added functionality with a DataList and SqlDataSources. When a user logs in, he/she will see items like saved workouts, saved equipment, total replies, etc. This is based on getting the UserID of the currently logged-in user. Quite simply, this works great when the user is logged in. However, I do not want to force a user to log in to view the Default page, because it does have functionality on it that does not require login. When a user is not logged in, of course, I receive the [System.NullReferenceException] error. I understand the error well, but I do not know how to code to fix it. That is where I need help. I will admit I am more designer than developer. However, I do know the exception I am receiving is caused by me not setting a value in my code when a user is not logged in. I do not know how to do that, and for a week I have made unsuccessful attempts at writing the code. Both sets of code below compile for VB.Net/ASP.Net 4/Visual Studio 2010 without errors. However, I still get the System.NullReferenceException error if not logged in. I know it can be done, but I do not know the right syntax. If you can help, please insert your code in mine or write it out. JUST TELLING ME WHERE TO GO TO FIND AN ANSWER WON'T HELP. I HAVE DONE THAT FOR 7 STRAIGHT DAYS. I APPRECIATE YOUR HELP.

        Partial Class _Default
            Inherits System.Web.UI.Page

            Protected Sub SqlDataSource4_Selecting(ByVal sender As Object, ByVal e As SqlDataSourceCommandEventArgs) Handles SqlDataSource4.Selecting
                Dim MemUser As MembershipUser
                MemUser = Membership.GetUser()
                If Not MemUser Is DBNull.Value Then
                    UserID.Text = MemUser.ProviderUserKey.ToString()
                    e.Command.Parameters("@UserId").Value = MemUser.ProviderUserKey.ToString()
                End If
            End Sub

    -------------------------------------ORIGINAL CODE-------------------------------

        Partial Class _Default
            Inherits System.Web.UI.Page

            Protected Sub SqlDataSource4_Selecting(ByVal sender As Object, ByVal e As SqlDataSourceCommandEventArgs) Handles SqlDataSource4.Selecting
                Dim MemUser As MembershipUser
                MemUser = Membership.GetUser()
                UserID.Text = MemUser.ProviderUserKey.ToString()
                e.Command.Parameters("@UserId").Value = MemUser.ProviderUserKey.ToString()
            End Sub


  • MDM for Tax Authorities

    - by david.butler(at)oracle.com
    In last week’s MDM blog, we discussed MDM in the Public Sector. I want to continue that thread. After all, no industry faces tougher data quality problems than governmental organizations, and few industries suffer more significant downside consequences from poor operations than local, state and federal governments. One key challenge area is taxation. Tax authorities face a multitude of IT challenges. Firstly, the data used in tax calculations is increasing in volume and complexity. They must improve service by introducing multi-channel contact centers and self-service capabilities. Security concerns necessitate increasingly sophisticated data protection procedures. And cost constraints are driving tax authorities to rely on off-the-shelf software for many of their functional areas. Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex. They typically include multiple, disparate operational and analytical systems across which the sum total of data about individual constituents is fragmented. To make matters more complicated, taxation is not carried out by a single jurisdiction, and often sources of income including employers, investments and other sources of taxable income and deductions must also be tracked and shared among tax authorities. Collectively, these systems are involved in tax assessment and collections, risk analysis, scoring, tracking, auditing and investigation case management.

    The Problem of Constituent Data Management

    The infrastructure described above makes it very difficult to create a consolidated representation of a given party. Differing formats and data models mean that a constituent may be represented in one way in one system and in a different way in another. Individual records are frequently inaccurate, incomplete, out of date and/or inconsistent with other records relating to the same constituent. When constituent data must be aggregated and scored, information within each system must be rationalized and normalized so the agency can produce a constituent information file (CIF) that provides a single source of truth about that party. If information about that constituent changes, each system in turn must be updated. There have been many attempts to solve this problem with technology: from consolidating transactional systems to conducting manual systems integration projects and superimposing layers of business intelligence and analytics. All these approaches can be successful in solving a portion of the problem at a specific point in time, but without an enterprise perspective, anything gained is quickly lost again.

    Oracle Constituent Data Mastering for Tax Authorities: A Single View of the Constituent

    Oracle has a flexible and long-term solution to the problem of securely integrating and managing constituent data. The Oracle solution for mastering constituent data for tax authorities is based on two core product offerings: Oracle Customer Hub and - optionally - Oracle Application Integration Architecture (AIA). Customer Hub is a master data management (MDM) product that centralizes, de-duplicates, and enriches constituent data. It unifies fragmented information without disrupting existing business processes or IT investments. Role-based data access and privacy rules guarantee maximum security and privacy. Data is continuously and automatically synchronized with all source systems.

    With the Oracle Customer Hub managing the master constituent identity, every department can capture transaction activity against the same record, improving reporting accuracy, employee productivity, reliability of constituent analytics, and day-to-day constituent relationships. Oracle Application Integration Architecture provides a collection of core pre-built processes to support out-of-the-box Master Data Governance across Oracle Customer Hub, Siebel CRM, and Oracle E-Business Suite. It also provides a framework to enable MDM integrations with other Oracle and non-Oracle applications. Oracle AIA removes some of the key inhibitors to implementing a service-oriented architecture (SOA) by providing a pre-built SOA-based middleware foundation as well as industry-optimized service oriented applications, all built around a SOA governance model that encourages effective design and reuse. I encourage you to read Oracle Solution for Mastering Constituents Data for Public Sector - Tax Authorities by Roberto Negro. It is an outstanding whitepaper that describes how the Oracle MDM solution allows you to create a unified, reconciled source of high-quality constituent data and gain an accurate single view of each constituent. This foundation enables you to lower the costs associated with data quality and integration and create a tax organization that is efficient, secure and constituent-centric. Also, don't forget the upcoming webcast on Thursday, February 10th: "Deliver Improved Services to Citizens at Lower Cost to your Organization". Our guest speaker is Ruben Spekle, from Capgemini. He will also provide insight into Public Sector Master Data Management and Case Management implementations, including one that was executed for a Dutch government agency. If you are interested in how governmental organizations from around the world are using MDM to advance their cause, click here to register for the webcast.


  • Add SQL Azure database to Azure Web Role and persist data with entity framework code first.

    - by MagnusKarlsson
    In my last post I went for a warts-n-all approach to setting up a web role on Azure. In this post I'll describe how to add a SQL Azure database to the project. This will be described with as minimal an amount of code and screen dumps as possible. All questions are welcome in the comments area. Please don't email, since questions answered in the comments field are made available to other visitors. As an example, we will add a comments section to the site we used in the previous post (link here).

    Steps:
    1. Create a Comments entity and then use scaffolding to set up the controller and view, and add a connection string to web.config.
    2. Create the SQL Azure database in the Management Portal and link the new database.
    3. Test it online!

    1. Right click the Models folder, choose Add, choose "Class...". Name the class Comment.

    1.1 Replace the code in the class with the following:

        using System.Data.Entity;

        namespace MvcWebRole1.Models
        {
            public class Comment
            {
                public int CommentId { get; set; }
                public string Name { get; set; }
                public string Content { get; set; }
            }

            public class CommentsDb : DbContext
            {
                public DbSet<Comment> CommentEntries { get; set; }
            }
        }

    Now Entity Framework can create a database and a table named Comment. Build your project to assert there are no build errors.

    1.2 Right click the Controllers folder, choose Add, choose "Class...". Name the class CommentController and fill out the values as in the example below. Click Add. Visual Studio now creates default views for CRUD operations and a controller adhering to these, and opens them.

    1.3 Open web.config and add the following connection string in the <connectionStrings> node:

        <add name="CommentsDb"
             connectionString="data source=(LocalDB)\v11.0;Integrated Security=SSPI;AttachDbFileName=|DataDirectory|\CommentsDb.mdf;Initial Catalog=CommentsDb;MultipleActiveResultSets=True"
             providerName="System.Data.SqlClient" />

    1.4 Save all and press F5 to start the application.

    1.5 Go to http://127.0.0.1:81/Comments, which will redirect you through CommentsController to the Index view. Click Create New. In the Create view, add a name and content and press Create.

        //
        // POST: /Comments/Create
        [HttpPost]
        public ActionResult Create(Comment comment)
        {
            if (ModelState.IsValid)
            {
                db.CommentEntries.Add(comment);
                db.SaveChanges();
                return RedirectToAction("Index");
            }

            return View(comment);
        }

    The default View() is Index, so that is the view you will come to:

        //
        // GET: /Comments/
        public ActionResult Index()
        {
            return View(db.CommentEntries.ToList());
        }

    Resulting in the following screen dump (success!).

    2. Now, go to the Management Portal and create a new db.

    2.1 With the new database created, click the DB icon in the leftmost menu. Then click the newly created database. Click DASHBOARD in the top menu. Finally, click "Connection strings" in the right menu to get the connection string we need to add to our web.debug.config file.

    2.2 Now, take a copy of the connection string added earlier to web.config and paste it into web.debug.config in the connectionStrings node. Replace everything within " " in the copied connection string with what you got from SQL Azure.

    2.3 Rebuild the application, right click the cloud project and choose "Package..." (if you haven't set up a publishing profile, which we will do in our next blog post). Remember to choose the right config file; use debug for staging and release for production so your databases won't collide.

    2.4 Go to the Management Portal, click the Web Services menu, choose your service and click Update in the bottom menu.

    2.5 Link the newly created database to your application. Click LINKED RESOURCES in the top menu and then click "Link" in the bottom menu.

    3. Alright then. Under the Dashboard you can find the link to your application. Click it to open it in a browser, then go to ~/Comments to try it out just the way we did locally. Success, and end of this story!


  • jQuery Mobile email form with PHP script

    - by Pablo Marino
    I'm working on a mobile website and I'm having problems with my email form. The problem is that when I receive the mail sent through the form, it contains no data. All the data entered by the user is missing. Here is the form code:

        <form action="correo.php" method="post" data-ajax="false">
            <div>
                <p><strong>You can contact us by filling the following form</strong></p>
            </div>
            <div data-role="fieldcontain">
                <fieldset data-role="controlgroup" data-mini="true">
                    <label for="first_name">First Name</label>
                    <input id="first_name" placeholder="" type="text" />
                </fieldset>
            </div>
            <div data-role="fieldcontain">
                <fieldset data-role="controlgroup" data-mini="true">
                    <label for="last_name">Last name</label>
                    <input id="last_name" placeholder="" type="text" />
                </fieldset>
            </div>
            <div data-role="fieldcontain">
                <fieldset data-role="controlgroup" data-mini="true">
                    <label for="email">Email</label>
                    <input id="email" placeholder="" type="email" />
                </fieldset>
            </div>
            <div data-role="fieldcontain">
                <fieldset data-role="controlgroup" data-mini="true">
                    <label for="telephone">Phone</label>
                    <input id="telephone" placeholder="" type="tel" />
                </fieldset>
            </div>
            <div data-role="fieldcontain">
                <fieldset data-role="controlgroup" data-mini="true">
                    <label for="age">Age</label>
                    <input id="age" placeholder="" type="number" />
                </fieldset>
            </div>
            <div data-role="fieldcontain">
                <fieldset data-role="controlgroup" data-mini="true">
                    <label for="country">Country</label>
                    <input id="country" placeholder="" type="text" />
                </fieldset>
            </div>
            <div data-role="fieldcontain">
                <fieldset data-role="controlgroup">
                    <label for="message">Your message</label>
                    <textarea id="message" placeholder="" data-mini="true"></textarea>
                </fieldset>
            </div>
            <input data-inline="true" data-theme="e" value="Submit" data-mini="true" type="submit" />
        </form>

    Here is the PHP code:

        <?
        /* PHP variables are initialized here */
        if (phpversion() >= "4.2.0") {
            if ( ini_get('register_globals') != 1 ) {
                $supers = array('_REQUEST', '_ENV', '_SERVER', '_POST', '_GET', '_COOKIE', '_SESSION', '_FILES', '_GLOBALS');
                foreach( $supers as $__s) {
                    if ( (isset($$__s) == true) && (is_array( $$__s ) == true) )
                        extract( $$__s, EXTR_OVERWRITE );
                }
                unset($supers);
            }
        } else {
            if ( ini_get('register_globals') != 1 ) {
                $supers = array('HTTP_POST_VARS', 'HTTP_GET_VARS', 'HTTP_COOKIE_VARS', 'GLOBALS', 'HTTP_SESSION_VARS', 'HTTP_SERVER_VARS', 'HTTP_ENV_VARS');
                foreach( $supers as $__s) {
                    if ( (isset($$__s) == true) && (is_array( $$__s ) == true) )
                        extract( $$__s, EXTR_OVERWRITE );
                }
                unset($supers);
            }
        }

        /* FROM HERE ON YOU CAN EDIT THE FILE */

        /* complains if the email field of the form has not been filled in */
        /* the response page for a successful send is specified here */
        $respuesta = "index.html#respuesta"; // the response can be another file, and can even be on another server

        /* HERE YOU SPECIFY THE ADDRESS THE FORM DATA SHOULD BE SENT TO;
           TO SEND THE DATA TO MORE THAN ONE ADDRESS, SEPARATE THEM WITH COMMAS */
        $para = "[email protected]"; /* this is not my real email, of course */

        /* HERE YOU SPECIFY THE SUBJECT OF THE EMAIL */
        $sujeto = "Mail de la página - Movil";

        /* the mail header is built here; in future versions of the script I will explain this part better */
        $encabezado = "From: $first_name $last_name <$email>";
        $encabezado .= "\nReply-To: $email";
        $encabezado .= "\nX-Mailer: PHP/" . phpversion();

        /* this captures the IP of whoever sent the message */
        $ip = $REMOTE_ADDR;

        /* the following lines assemble the message */
        $mensaje .= "NOMBRE: $first_name\n";
        $mensaje .= "APELLIDO: $last_name\n";
        $mensaje .= "EMAIL: $email\n";
        $mensaje .= "TELEFONO: $telephone\n";
        $mensaje .= "EDAD: $age\n";
        $mensaje .= "PAIS: $country\n";
        $mensaje .= "MENSAJE: $message\n";

        /* here we try to send the mail; if it fails, an error message is shown */
        if (!mail($para, $sujeto, $mensaje, $encabezado)) {
            echo "<h1>No se pudo enviar el Mensaje</h1>";
            exit();
        } else {
            /* here we redirect to the response page */
            echo "<meta HTTP-EQUIV='refresh' content='1;url=$respuesta'>";
        }
        ?>

    Any help please? Thanks in advance, Pablo


  • Java Port Socket Programming Error

    - by atrus-darkstone
    Hi, I have been working on a Java client-server program using port sockets. The goal of this program is for the client to take a screenshot of the machine it is running on, break the RGB info of this image down into integers and arrays, then send this info over to the server, where it is reconstructed into a new image file. However, when I run the program I am experiencing the following two bugs:
    1. The first number received by the server, no matter what its value is according to the client, is always 49.
    2. The client only sends (or the server only receives?) the first value, then the program hangs forever.
    Any ideas as to why this is happening, and what I can do to fix it? The code for both client and server is below. Thanks!

    CLIENT:

        import java.awt.*;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.awt.image.BufferedImage;
        import java.io.*;
        import java.net.Socket;
        import java.text.SimpleDateFormat;
        import java.util.*;
        import javax.swing.*;
        import javax.swing.Timer;

        public class ViewerClient implements ActionListener {
            private Socket vSocket;
            private BufferedReader in;
            private PrintWriter out;
            private Robot robot;
            // static BufferedReader orders = null;

            public ViewerClient() throws Exception {
                vSocket = null;
                in = null;
                out = null;
                robot = null;
            }

            public void setVSocket(Socket vs) { vSocket = vs; }
            public void setInput(BufferedReader i) { in = i; }
            public void setOutput(PrintWriter o) { out = o; }
            public void setRobot(Robot r) { robot = r; }

            /*************************************************/

            public Socket getVSocket() { return vSocket; }
            public BufferedReader getInput() { return in; }
            public PrintWriter getOutput() { return out; }
            public Robot getRobot() { return robot; }

            public void run() throws Exception {
                int speed = 2500;
                int pause = 5000;
                Timer timer = new Timer(speed, this);
                timer.setInitialDelay(pause);
                // System.out.println("CLIENT: Set up timer.");
                try {
                    setVSocket(new Socket("Alex-PC", 4444));
                    setInput(new BufferedReader(new InputStreamReader(getVSocket().getInputStream())));
                    setOutput(new PrintWriter(getVSocket().getOutputStream(), true));
                    setRobot(new Robot());
                    // System.out.println("CLIENT: Established connection and IO ports.");
                    // timer.start();
                    captureScreen(nameImage());
                } catch (Exception e) {
                    System.err.println(e);
                }
            }

            public void captureScreen(String fileName) throws Exception {
                Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
                Rectangle screenRectangle = new Rectangle(screenSize);
                BufferedImage image = getRobot().createScreenCapture(screenRectangle);
                int width = image.getWidth();
                int height = image.getHeight();
                int[] pixelData = new int[(width * height)];
                image.getRGB(0, 0, width, height, pixelData, width, height);
                byte[] imageData = new byte[(width * height)];
                String fromServer = null;
                if ((fromServer = getInput().readLine()).equals("READY")) {
                    sendWidth(width);
                    sendHeight(height);
                    sendArrayLength((width * height));
                    sendImageInfo(fileName);
                    sendImageData(imageData);
                }
                /*
                System.out.println(imageData.length);
                String fromServer = null;
                for (int i = 0; i < pixelData.length; i++) {
                    imageData[i] = ((byte)pixelData[i]);
                }
                System.out.println("CLIENT: Pixel data successfully converted to byte data.");
                System.out.println("CLIENT: Waiting for ready message...");
                if ((fromServer = getInput().readLine()).equals("READY")) {
                    System.out.println("CLIENT: Ready message recieved.");
                    getOutput().println("SENDING ARRAY LENGTH...");
                    System.out.println("CLIENT: Sending array length...");
                    System.out.println("CLIENT: " + imageData.length);
                    getOutput().println(imageData.length);
                    System.out.println("CLIENT: Array length sent.");
                    getOutput().println("SENDING IMAGE...");
                    System.out.println("CLIENT: Sending image data...");
                    for (int i = 0; i < imageData.length; i++) {
                        getOutput().println(imageData[i]);
                    }
                    System.out.println("CLIENT: Image data sent.");
                    getOutput().println("SENDING IMAGE WIDTH...");
                    System.out.println("CLIENT: Sending image width...");
                    getOutput().println(width);
                    System.out.println("CLIENT: Image width sent.");
                    getOutput().println("SENDING IMAGE HEIGHT...");
                    System.out.println("CLIENT: Sending image height...");
                    getOutput().println(height);
                    System.out.println("CLIENT: Image height sent...");
                    getOutput().println("SENDING IMAGE INFO...");
                    System.out.println("CLIENT: Sending image info...");
                    getOutput().println(fileName);
                    System.out.println("CLIENT: Image info sent.");
                    getOutput().println("FINISHED.");
                    System.out.println("Image data sent successfully.");
                }
                if ((fromServer = getInput().readLine()).equals("CLOSE DOWN")) {
                    getOutput().close();
                    getInput().close();
                    getVSocket().close();
                }
                */
            }

            public String nameImage() throws Exception {
                String dateFormat = "yyyy-MM-dd HH-mm-ss";
                Calendar cal = Calendar.getInstance();
                SimpleDateFormat sdf = new SimpleDateFormat(dateFormat);
                String fileName = sdf.format(cal.getTime());
                return fileName;
            }

            public void sendArrayLength(int length) throws Exception {
                getOutput().println("SENDING ARRAY LENGTH...");
                getOutput().println(length);
            }

            public void sendWidth(int width) throws Exception {
                getOutput().println("SENDING IMAGE WIDTH...");
                getOutput().println(width);
            }

            public void sendHeight(int height) throws Exception {
                getOutput().println("SENDING IMAGE HEIGHT...");
                getOutput().println(height);
            }

            public void sendImageData(byte[] imageData) throws Exception {
                getOutput().println("SENDING IMAGE...");
                for (int i = 0; i < imageData.length; i++) {
                    getOutput().println(imageData[i]);
                }
            }

            public void sendImageInfo(String info) throws Exception {
                getOutput().println("SENDING IMAGE INFO...");
                getOutput().println(info);
            }

            public void actionPerformed(ActionEvent a) {
                String message = null;
                try {
                    if ((message = getInput().readLine()).equals("PROCESSING...")) {
                        if ((message = getInput().readLine()).equals("IMAGE RECIEVED SUCCESSFULLY.")) {
                            captureScreen(nameImage());
                        }
                    }
                } catch (Exception e) {
                    JOptionPane.showMessageDialog(null, "Problem: " + e);
                }
            }
        }

    SERVER:

        import java.awt.image.BufferedImage;
        import java.io.*;
        import java.net.*;
        import javax.imageio.ImageIO;

        /* IMPORTANT TODO:
         * 1. CLOSE ALL STREAMS AND SOCKETS WITHIN CLIENT AND SERVER!
         * 2. PLACE MAIN EXEC CODE IN A TIMED WHILE LOOP TO SEND FILE EVERY X SECONDS
         */

        public class ViewerServer {
            private ServerSocket vServer;
            private Socket vClient;
            private PrintWriter out;
            private BufferedReader in;
            private byte[] imageData;
            private int width;
            private int height;
            private String imageInfo;
            private int[] rgbData;
            private boolean active;

            public ViewerServer() throws Exception {
                vServer = null;
                vClient = null;
                out = null;
                in = null;
                imageData = null;
                width = 0;
                height = 0;
                imageInfo = null;
                rgbData = null;
                active = true;
            }

            public void setVServer(ServerSocket vs) { vServer = vs; }
            public void setVClient(Socket vc) { vClient = vc; }
            public void setOutput(PrintWriter o) { out = o; }
            public void setInput(BufferedReader i) { in = i; }
            public void setImageData(byte[] imDat) { imageData = imDat; }
            public void setWidth(int w) { width = w; }
            public void setHeight(int h) { height = h; }
            public void setImageInfo(String im) { imageInfo = im; }
            public void setRGBData(int[] rd) { rgbData = rd; }
            public void setActive(boolean a) { active = a; }

            /***********************************************/

            public ServerSocket getVServer() { return vServer; }
            public Socket getVClient() { return vClient; }
            public PrintWriter getOutput() { return out; }
            public BufferedReader getInput() { return in; }
            public byte[] getImageData() { return imageData; }
            public int getWidth() { return width; }
            public int getHeight() { return height; }
            public String getImageInfo() { return imageInfo; }
            public int[] getRGBData() { return rgbData; }
            public boolean getActive() { return active; }

            public void run() throws Exception {
                connect();
                setActive(true);
                while (getActive()) {
                    recieve();
                }
                close();
            }

            public void recieve() throws Exception {
                String clientStatus = null;
                int clientData = 0;
                // System.out.println("SERVER: Sending ready message...");
                getOutput().println("READY");
                // System.out.println("SERVER: Ready message sent.");
                if ((clientStatus = getInput().readLine()).equals("SENDING IMAGE WIDTH...")) {
                    setWidth(getInput().read());
                    System.out.println("Width: " + getWidth());
                }
                if ((clientStatus = getInput().readLine()).equals("SENDING IMAGE HEIGHT...")) {
                    setHeight(getInput().read());
                    System.out.println("Height: " + getHeight());
                }
                if ((clientStatus = getInput().readLine()).equals("SENDING ARRAY LENGTH...")) {
                    clientData = getInput().read();
                    setImageData(new byte[clientData]);
                    System.out.println("Array length: " + clientData);
                }
                if ((clientStatus = getInput().readLine()).equals("SENDING IMAGE INFO...")) {
                    setImageInfo(getInput().readLine());
                    System.out.println("Image Info: " + getImageInfo());
                }
                if ((clientStatus = getInput().readLine()).equals("SENDING IMAGE...")) {
                    for (int i = 0; i < getImageData().length; i++) {
                        getImageData()[i] = ((byte)getInput().read());
                    }
                }
                if ((clientStatus = getInput().readLine()).equals("FINISHED.")) {
                    getOutput().println("PROCESSING...");
                    setRGBData(new int[getImageData().length]);
                    for (int i = 0; i < getRGBData().length; i++) {
                        getRGBData()[i] = getImageData()[i];
                    }
                    BufferedImage image = null;
                    image.setRGB(0, 0, getWidth(), getHeight(), getRGBData(), getWidth(), getHeight());
                    ImageIO.write(image, "png", new File(imageInfo + ".png")); // create an image file out of the screenshot
                    getOutput().println("IMAGE RECIEVED SUCCESSFULLY.");
                }
            }

            public void connect() throws Exception {
                setVServer(new ServerSocket(4444)); // establish server connection
                // System.out.println("SERVER: Connection established.");
                setVClient(getVServer().accept()); // accept client connection request
                // System.out.println("SERVER: Accepted connection request.");
                setOutput(new PrintWriter(vClient.getOutputStream(), true)); // set up an output channel
                setInput(new BufferedReader(new InputStreamReader(vClient.getInputStream()))); // set up an input channel
                // System.out.println("SERVER: Created IO ports.");
            }

            public void close() throws Exception {
                getOutput().close();
                getInput().close();
                getVClient().close();
                getVServer().close();
            }
        }


  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules were the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API, as well as the core ASP.NET runtime, to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care.

    What's a Web API?

    HTTP 'APIs' (Microsoft's new terminology for a service, I guess) are becoming increasingly important with the rise of the many devices in use today. Most mobile devices like phones and tablets run apps that use data retrieved from the Web over HTTP. Desktop applications are also moving in this direction, with more and more online content and syncing moving into even traditional desktop applications. The pending Windows 8 release promises an app-like platform for both the desktop and other devices that also emphasizes consuming data from the cloud. Likewise, many Web browser hosted applications these days rely on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup, typically in the form of JSON or XML. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server.

    But .NET already has various HTTP Platforms

    The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up, from low level APIs/custom engines to high level HTML generation engines. ASP.NET as a core platform clearly has stood the test of time 10+ years later, and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / services using any of the existing out-of-box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs.
    Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex, like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own, but it's not built in). Prior to Web API, Microsoft's main push for HTTP services was WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF-agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API, for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit, as Web API addresses just about all the features I implemented on my own and much more.

    ASP.NET Web API provides a better HTTP Experience

    ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it's a brand new platform rather than a bolted-on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like:
    - Strong support for URL routing to produce clean URLs, using familiar MVC style routing semantics
    - Content negotiation based on Accept headers for request and response serialization
    - Support for a host of supported output formats including JSON, XML, ATOM
    - Strong default support for REST semantics, but they are optional
    - Easily extensible formatter support to add new input/output types
    - Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed enums to describe many HTTP operations
    - Convention based design that drives you into doing the right thing for HTTP services
    - Very extensible, based on an MVC-like extensibility model of Formatters and Filters
    - Self-hostable in non-Web applications
    - Testable using testing concepts similar to MVC
    Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available, in a straightforward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API.
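    To make that concrete, here is a minimal sketch of a Web API controller (the AlbumsController name and its data are illustrative): routing maps GET /api/albums to Get() by convention, and the framework content-negotiates the return value into JSON or XML based on the request's Accept header, with no serialization code in the controller itself.

        using System.Collections.Generic;
        using System.Web.Http;

        public class AlbumsController : ApiController
        {
            private static readonly List<string> Albums =
                new List<string> { "Dirt", "Nevermind", "Ten" };

            // GET /api/albums
            public IEnumerable<string> Get()
            {
                return Albums;
            }

            // GET /api/albums/1
            public string Get(int id)
            {
                return Albums[id];
            }
        }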
The routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with a focus on HTTP access and manipulation in controller methods rather than on HTML generation. There's much-improved support for content negotiation based on HTTP Accept headers: the framework can automatically detect what content the client is sending and requesting, and serve the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service back ends are often used by multiple clients/applications, and being able to choose the data format that fits the client best matters. While previous solutions could accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server-side HTTP framework that intrinsically understands HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low-level hook points that allow you to plug in custom functionality with relatively little effort.

No Brainers for Web API

There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even a part of an application, is some sort of API, then Web API makes great sense.

HTTP Services
If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion about what should go where (MVC or API). Perfect fit.

Primary AJAX Backends
If you're building rich client Web applications that rely heavily on AJAX callbacks to serve their data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPAs), there's typically very little HTML-based logic served other than bringing up a shell UI; the data is then filled in from the server with AJAX, which means the business logic required for data retrieval, data acceptance and validation also lives in the Web API. Perfect fit.

Generic HTTP Endpoints
Another good fit are generic HTTP endpoints that serve data or handle 'utility'-type functionality in typical Web applications. If you need to implement an image server or an upload handler, in the past I'd implement that as an HTTP handler. With Web API you now have a well-defined place where you can implement these types of generic 'services', in a location where you can easily add endpoints (via controller methods) or separate them out as more full-featured APIs. Granted, this could be done with MVC as well, but Web API seems like a clearer and better-defined place to keep generic application services. This is one thing I used to do a lot of in my own libraries, and Web API addresses it nicely. Great fit.

Mixed HTML and AJAX Applications: Not a clear Choice

For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when it makes sense to use MVC vs.
Web API when dealing with a typical Web application that creates HTML and also uses AJAX for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML-related generation into MVC, that can often result in a lot of code duplication. Also, MVC supports JSON and XML result data fairly easily as well, so there's some confusion about where the 'trigger point' is for switching to Web API vs. just implementing the functionality as part of your MVC controllers. Ultimately there's a trade-off between isolation of functionality and duplication. A good rule of thumb: if a large chunk of the application's functionality serves data, Web API is a good choice; but if you only have a couple of small AJAX requests feeding a grid or an autocomplete box, it would be overkill to separate that logic out into a Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET), so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily, and that functionality is not going away - just because Web API is there doesn't mean you have to use it. Web API is obviously not a full replacement for MVC either, since there isn't the same level of support for feeding HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if HTML is part of your API or application in general, MVC is still the better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC, so that you might even be able to mix functionality of both into single controllers and not have to make any trade-offs, but at the moment that's not the case.

Some Issues to think about

Web API is similar to MVC but not the Same
Although Web API looks a lot like MVC, it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different from MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data.

Code Duplication
I already touched on this in the Mixed HTML and AJAX section, but if you build an MVC application that also exposes a Web API, it's quite likely that you will end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for the HTML application and for the Web API, which might need something different altogether. More often than not the same logic is used, and there's no easy way to share it. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice.

AJAX Data or AJAX HTML
On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says:

I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-)

You know what - I can totally relate to that.
On the last Web-based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this approach, the overhead ended up being fairly small if you keep the 'data' requests small and atomic, and the cost was often made up by not having to render HTML on the client. Server-rendered HTML for AJAX templating gives you much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials, it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort, as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API, of course, but something to think about, especially for AJAX or SPA-style applications.

Summary

Web API is a great new addition to the ASP.NET platform, and it addresses a serious need for consolidation of the many half-baked HTTP service API technologies that came before it. Web API feels 'right' and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that other tools or tech that came before it should be discarded or upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is doing now - there's very little to be gained by upgrading to Web API 'just because'. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

Resources

- ASP.NET Web API
- AspConf Ask the Experts Session (first 5 minutes)

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article

  • "Something wicked happened" error in apt-get

    - by Dragon
Every time I try to install through the terminal I get this "Something wicked happened" error. I am not able to install or update, and I can't find a working answer for this here. Here is my apt-get update result: Hit http://ppa.launchpad.net raring Release.gpg Hit http://deb.opera.com stable Release.gpg Hit http://ppa.launchpad.net raring Release.gpg Hit http://deb.opera.com stable Release Hit http://ppa.launchpad.net raring Release Hit http://deb.opera.com stable/non-free i386 Packages Hit http://ppa.launchpad.net raring Release Hit http://ppa.launchpad.net raring/main i386 Packages Hit http://ppa.launchpad.net raring/main i386 Packages Ign http://deb.opera.com stable/non-free Translation-en_US Ign http://deb.opera.com stable/non-free Translation-en Ign http://ppa.launchpad.net raring/main Translation-en_US Ign http://ppa.launchpad.net raring/main Translation-en Ign http://ppa.launchpad.net raring/main Translation-en_US Ign http://ppa.launchpad.net raring/main Translation-en Err http://archive.ubuntu.com raring Release.gpg Something wicked happened resolving 'archive.ubuntu.com:http' (-11 - System error) Err http://extras.ubuntu.com raring Release.gpg Something wicked happened resolving 'extras.ubuntu.com:http' (-11 - System error) Err http://dl.google.com stable Release.gpg Something wicked happened resolving 'dl.google.com:http' (-11 - System error) Ign https://private-ppa.launchpad.net quantal Release.gpg Err http://archive.ubuntu.com raring-updates Release.gpg Something wicked happened resolving 'archive.ubuntu.com:http' (-11 - System error) Get:1 http://archive.ubuntu.com raring-security Release.gpg [933 B] Get:2 http://archive.ubuntu.com raring-proposed Release.gpg [933 B] Get:3 http://archive.ubuntu.com raring-backports Release.gpg [933 B] Hit http://archive.ubuntu.com raring Release Get:4 http://archive.ubuntu.com raring-updates Release [40.8 kB] Get:5 http://archive.ubuntu.com raring-security Release [40.8 kB] Ign http://extras.ubuntu.com raring Release Ign http://dl.google.com stable Release Ign https://private-ppa.launchpad.net quantal Release Get:6 http://archive.ubuntu.com raring-proposed Release [40.8 kB] Hit http://archive.ubuntu.com raring-backports Release Ign http://archive.ubuntu.com raring/main Sources/DiffIndex Ign http://archive.ubuntu.com raring/restricted Sources/DiffIndex Ign http://archive.ubuntu.com raring/universe Sources/DiffIndex Ign http://archive.ubuntu.com raring/multiverse Sources/DiffIndex Ign http://archive.ubuntu.com raring/main i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring/restricted i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring/universe i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring/multiverse i386 Packages/DiffIndex Hit http://archive.ubuntu.com raring/main Translation-en Hit http://archive.ubuntu.com raring/multiverse Translation-en Hit http://archive.ubuntu.com raring/restricted Translation-en Ign http://dl.google.com stable/main i386 Packages/DiffIndex Hit http://archive.ubuntu.com raring/universe Translation-en Ign http://archive.ubuntu.com raring-updates/main Sources/DiffIndex Ign http://archive.ubuntu.com raring-updates/restricted Sources/DiffIndex Ign http://archive.ubuntu.com raring-updates/universe Sources/DiffIndex Ign https://private-ppa.launchpad.net quantal/main i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring-updates/multiverse Sources/DiffIndex Ign http://archive.ubuntu.com raring-updates/main i386 Packages/DiffIndex Hit http://dl.google.com stable/main i386 Packages Ign http://archive.ubuntu.com
raring-updates/restricted i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring-updates/universe i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring-updates/multiverse i386 Packages/DiffIndex Hit http://archive.ubuntu.com raring-updates/main Translation-en Hit http://archive.ubuntu.com raring-updates/multiverse Translation-en Hit http://archive.ubuntu.com raring-updates/restricted Translation-en Hit http://archive.ubuntu.com raring-updates/universe Translation-en Get:7 http://archive.ubuntu.com raring-security/main Sources [24.7 kB] Get:8 http://archive.ubuntu.com raring-security/restricted Sources [14 B] Get:9 http://archive.ubuntu.com raring-security/universe Sources [4,802 B] Get:10 http://archive.ubuntu.com raring-security/multiverse Sources [690 B] Hit https://private-ppa.launchpad.net quantal/main i386 Packages Ign http://dl.google.com stable/main Translation-en_US Get:11 http://archive.ubuntu.com raring-security/main i386 Packages [67.9 kB] Ign http://dl.google.com stable/main Translation-en Get:12 http://archive.ubuntu.com raring-security/restricted i386 Packages [14 B] Get:13 http://archive.ubuntu.com raring-security/universe i386 Packages [19.2 kB] Get:14 http://archive.ubuntu.com raring-security/multiverse i386 Packages [1,403 B] Hit http://archive.ubuntu.com raring-security/main Translation-en Ign http://extras.ubuntu.com raring/main Sources/DiffIndex Hit http://archive.ubuntu.com raring-security/multiverse Translation-en Hit http://archive.ubuntu.com raring-security/restricted Translation-en Hit http://archive.ubuntu.com raring-security/universe Translation-en Get:15 http://archive.ubuntu.com raring-proposed/universe i386 Packages [18.0 kB] Get:16 http://archive.ubuntu.com raring-proposed/main i386 Packages [29.9 kB] Get:17 http://archive.ubuntu.com raring-proposed/multiverse i386 Packages [14 B] Get:18 http://archive.ubuntu.com raring-proposed/restricted i386 Packages [14 B] Hit http://archive.ubuntu.com raring-proposed/main Translation-en Hit http://archive.ubuntu.com raring-proposed/multiverse Translation-en Hit http://archive.ubuntu.com raring-proposed/restricted Translation-en Hit http://archive.ubuntu.com raring-proposed/universe Translation-en Hit http://archive.ubuntu.com raring-backports/multiverse i386 Packages Hit http://archive.ubuntu.com raring-backports/main i386 Packages Hit http://archive.ubuntu.com raring-backports/restricted i386 Packages Hit http://archive.ubuntu.com raring-backports/universe i386 Packages Hit http://archive.ubuntu.com raring-backports/main Translation-en Ign https://private-ppa.launchpad.net quantal/main Translation-en_US Hit http://archive.ubuntu.com raring-backports/multiverse Translation-en Hit http://archive.ubuntu.com raring-backports/restricted Translation-en Hit http://archive.ubuntu.com raring-backports/universe Translation-en Hit http://archive.ubuntu.com raring/main Sources Ign https://private-ppa.launchpad.net quantal/main Translation-en Hit http://archive.ubuntu.com raring/restricted Sources Hit http://archive.ubuntu.com raring/universe Sources Hit http://archive.ubuntu.com raring/multiverse Sources Hit http://archive.ubuntu.com raring/main i386 Packages Hit http://archive.ubuntu.com raring/restricted i386 Packages Hit http://archive.ubuntu.com raring/universe i386 Packages Hit http://archive.ubuntu.com raring/multiverse i386 Packages Get:19 http://archive.ubuntu.com raring-updates/main Sources [37.0 kB] Get:20 http://archive.ubuntu.com raring-updates/restricted Sources [14 B] Get:21 http://archive.ubuntu.com 
raring-updates/universe Sources [49.8 kB] Ign http://extras.ubuntu.com raring/main i386 Packages/DiffIndex Get:22 http://archive.ubuntu.com raring-updates/multiverse Sources [690 B] Get:23 http://archive.ubuntu.com raring-updates/main i386 Packages [93.5 kB] Get:24 http://archive.ubuntu.com raring-updates/restricted i386 Packages [14 B] Get:25 http://archive.ubuntu.com raring-updates/universe i386 Packages [94.2 kB] Get:26 http://archive.ubuntu.com raring-updates/multiverse i386 Packages [1,403 B] Ign http://archive.ubuntu.com raring/main Translation-en_US Ign http://archive.ubuntu.com raring/multiverse Translation-en_US Ign http://archive.ubuntu.com raring/restricted Translation-en_US Ign http://archive.ubuntu.com raring/universe Translation-en_US Ign http://archive.ubuntu.com raring-updates/main Translation-en_US Ign http://archive.ubuntu.com raring-updates/multiverse Translation-en_US Ign http://archive.ubuntu.com raring-updates/restricted Translation-en_US Ign http://archive.ubuntu.com raring-updates/universe Translation-en_US Ign http://archive.ubuntu.com raring-security/main Translation-en_US Ign http://archive.ubuntu.com raring-security/multiverse Translation-en_US Ign http://archive.ubuntu.com raring-security/restricted Translation-en_US Ign http://archive.ubuntu.com raring-security/universe Translation-en_US Ign http://archive.ubuntu.com raring-proposed/main Translation-en_US Ign http://archive.ubuntu.com raring-proposed/multiverse Translation-en_US Ign http://archive.ubuntu.com raring-proposed/restricted Translation-en_US Ign http://archive.ubuntu.com raring-proposed/universe Translation-en_US Ign http://archive.ubuntu.com raring-backports/main Translation-en_US Ign http://archive.ubuntu.com raring-backports/multiverse Translation-en_US Ign http://archive.ubuntu.com raring-backports/restricted Translation-en_US Ign http://archive.ubuntu.com raring-backports/universe Translation-en_US Err http://extras.ubuntu.com raring/main Translation-en_US Something wicked happened resolving 'extras.ubuntu.com:http' (-11 - System error) Err http://extras.ubuntu.com raring/main Translation-en Something wicked happened resolving 'extras.ubuntu.com:http' (-11 - System error) Err http://extras.ubuntu.com raring/main Sources Something wicked happened resolving 'extras.ubuntu.com:http' (-11 - System error) Err http://extras.ubuntu.com raring/main i386 Packages Something wicked happened resolving 'extras.ubuntu.com:http' (-11 - System error) Fetched 568 kB in 8min 0s (1,181 B/s) W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/Release.gpg Something wicked happened resolving 'archive.ubuntu.com:http' (-11 - System error) W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/Release.gpg Something wicked happened resolving 'archive.ubuntu.com:http' (-11 - System error) W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/raring/Release.gpg Something wicked happened resolving 'extras.ubuntu.com:http' (-11 - System error) W: Failed to fetch http://dl.google.com/linux/chrome/deb/dists/stable/Release.gpg Something wicked happened resolving 'dl.google.com:http' (-11 - System error) W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/raring/main/i18n/Translation-en_US Something wicked happened resolving 'extras.ubuntu.com:http' (-11 - System error) W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/raring/main/i18n/Translation-en Something wicked happened resolving 'extras.ubuntu.com:http' (-11 - System error) W: Failed to fetch 
http://extras.ubuntu.com/ubuntu/dists/raring/main/source/Sources Something wicked happened resolving 'extras.ubuntu.com:http' (-11 - System error) W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/raring/main/binary-i386/Packages Something wicked happened resolving 'extras.ubuntu.com:http' (-11 - System error) E: Some index files failed to download. They have been ignored, or old ones used instead.
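The excerpt ends without an answer, but as a hedged pointer: the "(-11 - System error)" text appears when hostname resolution fails, which usually indicates a DNS problem on the machine rather than anything wrong with apt itself. A quick triage might look like the following (these are standard utilities; the 8.8.8.8 nameserver is just an example of a known-good public resolver):

    # Does the repository hostname resolve at all?
    nslookup archive.ubuntu.com

    # If resolution fails, point at a known-good nameserver temporarily
    # (note: this file may be overwritten by the network manager on reboot):
    echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf

    # Then retry the update:
    sudo apt-get update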

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing Test Result Analysis, Test Result Drill through, Test Report Generation, Test Run Comparison, the ASP.NET Profiler and some closing thoughts.

Test Results – I see some creepy worms!

In Part 2 we put together a web performance test and a load test; let's run the load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

- Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

- After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; it provides key results in a format that is compact and easy to read. You can also print the load test summary, which is generated after the test has completed or been stopped.

- Analyse the load test results of a previously run load test - We'll see this in the section where I discuss comparison between two test runs.

The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and drill down to the user activity chart where you can hover over data points to see more details of the test run.

Generate Report => Test Run Comparisons

The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. 'View Data and Diagnostic Attachments' opens the 'Choose Diagnostic Data Adapter Attachment' dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

Compare results

This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts

While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might just escalate the problem. The better suggestion is to try and drill down to the actual root cause.

Whenever garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up - but realistically, garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 kB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember, large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: objects that are supposed to be short-lived live in Gen-0, and long-living objects eventually move to Gen-2 as garbage collection goes through.

As you can see in the picture below, all objects smaller than 85 kB are first assigned to Gen-0. When a new object comes in and finds Gen-0 full, the garbage collection process starts: it checks for dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are purged, the rest are moved to Gen-2, and Gen-0 objects are moved to Gen-1 to free up Gen-0 - but by this time your garbage collection process has started to take much more time than it usually does. Now, as I mentioned earlier, while garbage collection is running, all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

Let's explore the heap a bit more... The real case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data: you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache, then run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but it also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
How can you address this? Well, there are two ways:

1. Workaround - An x86 (32-bit) processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because Team Builds by default build your application in 'compile as any' ('Any CPU') mode, the application will run in 32-bit mode on an x86 processor and in 64-bit mode on an x64 processor. The problem is that not all applications are really 64-bit compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'Any CPU' selection to x86 in the team build, you will be able to run your application on a 64-bit machine in 32-bit mode (WOW - by running Windows on Windows). What that means is you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.

2. Solution - Now that you have put a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of the memory leak. But this begs the question, "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, 'Performance Profiling' is a great starting point; it definitely gets you moving in the right direction. Let's have a look at how.

Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance Session properties page and, as shown in the screen shot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

Now if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and since capturing the IntelliTrace data lets you drill in, you can narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with best-of-breed tools in the product.

Caching

One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request, you keep the data on the web server, and when the user asks for it, you serve it from the web server itself. BUT that can have consequences!
Let's look at some code - and trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache - a significant performance improvement - EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the sketch following this paragraph. So, as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to populate the cache - no more concurrency issues. But let's say you are processing 10 requests per second: by the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache, they will still go in to get the results again. The application will still be faster, because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we add another null check after taking the lock, and before the actual call to the database, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn't I say that the code won't be very intuitive? Maybe you should leave a comment - you don't want another developer to come in, wonder why you are checking the cache key for null twice, and 'fix' it!!!

The downside of caching is that you are storing the data outside of the database, and the data could be wrong, because updates applied to the database make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data - say, a single entry point for data updates - you could write some logic to set the cache object to null every time new data is entered. But this approach will not work as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute, you will see immense performance gains - you would reduce hits to the database for searching by 90% or more. Ever wondered why, when you go to ebookers.com or expedia.com or yatra.com to book a flight, you click on the book button because the fare seems too exciting, and you get an error message telling you that the fare is not valid any more? Yes, exactly => that is a cache failure! These travel sites and price comparison engines are not going to hit the database every time you hit the compare button; instead, the results are served from the cache, because the query results are micro cached. It's a perfect trade-off: by micro caching the results the site gains 100% of the performance benefits, but every once in a while it annoys a customer because the fare has expired.
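The original snippets aren't reproduced in this excerpt, so here is a minimal sketch of the double-checked locking pattern described above. The SearchResults type, cache key and QueryDatabase helper are hypothetical stand-ins; the ASP.NET cache API calls (HttpRuntime.Cache) are real.

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class SearchCache
    {
        private static readonly object SyncRoot = new object();

        public static SearchResults GetResults(string cacheKey)
        {
            // First check: the common case, served straight from the cache.
            var results = HttpRuntime.Cache[cacheKey] as SearchResults;
            if (results == null)
            {
                lock (SyncRoot)
                {
                    // Second check: callers that were queued on the lock find
                    // the cache already populated and skip the database trip.
                    // Leave this comment in - it is not a redundant check!
                    results = HttpRuntime.Cache[cacheKey] as SearchResults;
                    if (results == null)
                    {
                        results = QueryDatabase(cacheKey); // hypothetical helper

                        // Micro caching: absolute expiry after a short window,
                        // so stale data can live for at most one minute.
                        HttpRuntime.Cache.Insert(cacheKey, results, null,
                            DateTime.Now.AddMinutes(1),
                            Cache.NoSlidingExpiration);
                    }
                }
            }
            return results;
        }

        private static SearchResults QueryDatabase(string key)
        {
            return new SearchResults(); // stand-in for the real query
        }
    }

    public class SearchResults { }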
But the trade-off works in favour of these sites: they are still able to process 30+ page requests per second, which means they cater to the site traffic while perhaps losing one customer every once in a while to a competitor - who is likely using a similar caching technique anyway, so what are the odds that the user will not come back to their site sooner or later?

Recap

Resources

Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

The Road Ahead

Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc. - please leave a comment. Next up is 'Load Testing in the cloud', where I'll be exploring the possibilities of running Test Controllers/Agents in the Cloud. See you on the other side! Thank You!

    Read the article

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today's market has moved from network-centric to consumer-centric and focuses primarily on the customer experience. This has resulted in several product management challenges, such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence at the BSS layer, product co-creation, and increasing audit and security concerns for service providers. This document discusses how enterprise product management, enabled by PLM-based product catalogue solutions, helps to launch next generation products rapidly in the context of the Telecommunications industry.

1.0. Introduction

Figure 1: Business Scenario

Modern business demands the launch of complex products in a very short timeframe, and effecting changes in price plans faster without IT intervention. One of the key transformation initiatives companies are focusing on is product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based Product Management solutions developed on industry-wide standards.

The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

2.0. An Overview of Product Management in Telecom

Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, the increased volume of offerings, bundled offering structures and ever increasing regulatory requirements.

In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management. This is a roadblock in the journey towards becoming an agile organization.
Figure 2: Complexity in Product Management

'Network Technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e., moving from siloed networks to converged networks. Consequently, the product solution is different.

Figure 3: Current Scenario - Pain Points in Product Management

The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that hampers innovation.

3.0. Transformation of Next Generation Product Management

Companies must focus their Product Management Transformation Journey on the areas of:

·       Management of a single truth of product information across the organization and geographies, which is currently managed in heterogeneous systems

·       Management of the Intellectual Property (IP) of the product concept, and partnership in the design of discrete components to integrate into the system

·       Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation

·       Management of effective operational separation to comply with regulatory bodies

·       Reuse of existing designs, adding relevant features such as value-added services to enable effective product bundling

Figure 4: Next generation needs

PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler of product management transformation and rapid product launch.

4.0. PLM-based Enterprise Product Management

Figure 5: PLM-based Enterprise Product Mastering

Enterprise Product Management (EPM) enables the business to manage complex product attributes and data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.

4.1 The Business Case for Telco PLM-based solutions for Enterprise Product Management

·       Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules

·       PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)

·       They maintain relationships/links between the different elements of the entire product definition

·       Telco PLM packages specialize in next generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis)

·       They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order-oriented and transactional

·       The new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM; they are interoperable and support integration frameworks such as subscription and notification

·       Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

4.2 Various Architectures/Approaches for Product Mastering using Telco PLM systems

4.2.a Single Central Product Management (Mastering) Approach

Figure 6: Single Central Product Management (Master) Approach

This approach is implemented across verticals such as aerospace and automotive.
It focuses on a physically centralized product master on which other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment; the product definition/authoring environment is also centralized. The existing legacy product definition data available in the CRM product catalogue, billing catalogue and legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.

Architectural changes must be made to the existing landscape of applications that create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the centrally managed and published rules.

Complete product configuration validation is done in the enterprise/central product catalogue, and the final configuration is sent to the B/OSS systems through the SOA-compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

4.2.b Federated Product Management (Mastering) Architecture

Figure 7: Federated Product Management (Mastering) Architecture

In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) is created and maintained centrally, while the advanced product definition (product bundling, promotions, offers and discount plans) is created in the respective downstream OSS systems.

For example, basic product definitions such as attributes, product hierarchy and basic price plans are created and maintained in the Enterprise/Central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition and master the advanced product definition. The central reference database accesses the respective source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal, and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance.

5.0 Customer Scenario – Before EPC deployment

A leading global telecommunications service provider wanted to launch quad play and triple play service offerings in the shortest possible lead time. The service provider was offering Broadband and VoIP services to customers. The company wanted to reuse the majority of the Broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play.
The challenges in launching the new service offerings were:

Figure 8: Triple Play Plan

·       Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)

·       Product managers spent a lot of time performing tasks involving duplication or re-keying of data; the manual effort caused errors, and cost and time overruns

·       There was no effective product and price data governance mechanism; price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue

·       Product data had reusability issues and was not in a structured format, resulting in uncontrolled product portfolio creation and product management issues

·       The lack of an enterprise product model resulted in product distribution challenges and thus delays in product launch

·       Designers were constrained by existing legacy product management solutions when modeling product/service requirements and product configuration rules such as upgrading, downgrading and cross selling

5.1 Customer Scenario - After EPC deployment

Figure 9: SOA-based end-to-end EPC Solution

After evaluating various product catalogues, the company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad play service. The Broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for the new offerings was created using the entities defined in the Enterprise Product Model. Supplier product catalogue data, such as routers and set-top boxes, was loaded into the new solution through a SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled by the transformation layer that is part of the solution.

6.0 How PLM enabled Product Management Transformation

Figure 10: Product Management Transformation

The PLM-based Product Catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of Product Management for next generation services.

7.0 Conclusion

On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and the increased complexity of products. On the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, introducing new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large scale IT transformation initiatives to improve ARPU, supporting network and business transformations, are a business imperative. Product Management has become a focus area. Companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch.

References:

1.     Preston G. Smith and Donald G. Reinertsen, "Developing Products in Half the Time", Van Nostrand Reinhold.

2.     John G. Innes, "Achieving Successful Product Change", Pitman Publishing.
3.     D. T. Pham and R. M. Setchi (16 Jan 2001), "Authoring environment for documentation development", University of Wales Cardiff, U.K., Proceedings of the Institution of Mechanical Engineers, Vol. 215, Part B.

4.     Oracle Product Hub for Communications: http://www.oracle.com/us/products/applications/master-data-management/product-hub-082059.html

    Read the article

  • Getting PATH right for python after MacPorts install

    - by BenjaminGolder
I can't import some python libraries (PIL, psycopg2) that I just installed with MacPorts. I looked through these forums and tried to adjust my PATH variable in $HOME/.bash_profile in order to fix this, but it did not work. I added the location of PIL and psycopg2 to PATH. I know that Terminal is using a version of python other than the one installed by MacPorts at /opt/local/bin. Do I need to use the MacPorts version of Python in order to ensure that PIL and psycopg2 are on sys.path when I use python in Terminal? Should I switch to the MacPorts version of Python, or will that cause more problems? In case it is helpful, here are more facts: PIL and psycopg2 are installed in /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages. "which python" returns /usr/bin/python. "echo $PATH" returns (I separated each path for easy reading): :/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ :/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages :/opt/local/bin :/opt/local/sbin :/usr/local/git/bin :/usr/bin :/bin :/usr/sbin :/sbin :/usr/local/bin :/usr/local/git/bin :/usr/X11/bin :/opt/local/bin In python, sys.path returns: /Library/Frameworks/SQLite3.framework/Versions/3/Python /Library/Python/2.6/site-packages/numpy-override /Library/Frameworks/GDAL.framework/Versions/1.7/Python/site-packages /Library/Frameworks/cairo.framework/Versions/1/Python /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python26.zip /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6 /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-old /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload /Library/Python/2.6/site-packages /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/PyObjC /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/wx-2.8-mac-unicode I welcome any criticism and comments if any of the above looks foolish or poorly conceived. I'm new to all of this. Thanks! Running OS X 10.6.5 on a MacBook Pro, invoking python 2.6.1 from Terminal.
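No answer is included in this excerpt, but here is a hedged sketch of one way to untangle it. PATH only decides which interpreter binary runs; module lookup is driven by that interpreter's own sys.path (and PYTHONPATH), so adding site-packages directories to PATH has no effect. MacPorts installs its interpreter as python2.6 under /opt/local/bin, which is why the bare python command still resolves to Apple's build. The python_select command below is an assumption based on MacPorts of that era (newer MacPorts uses "port select" instead):

    # Which interpreter wins on PATH right now?
    which python                  # -> /usr/bin/python (Apple's build)

    # The MacPorts interpreter should already see the MacPorts packages:
    /opt/local/bin/python2.6 -c "import PIL, psycopg2; print 'ok'"

    # Make the bare `python` command mean the MacPorts build (assumes the
    # python_select port is installed; on newer MacPorts use
    # `sudo port select --set python python26`):
    sudo python_select python26
    hash -r && which python       # should now resolve under /opt/local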

    Read the article

  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
    Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I’ve been doing a lot of Silverlight development lately, I decided to scratch my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along. Silverlight 3 WCF RIA Services Bing Maps Silverlight Control * Managed Extensibility Framework (optional) MVVM Light Toolkit (optional) log4net (optional) * If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application. Getting Started We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below. RIA Services Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out because each scenario I just mentioned could potentially use a different Domain Service base class (i.e. LinqToSqlDomainService<TDataContext>). Now we can start adding Query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.         
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
Model

I also created a Model folder and added a new class, Earthquake.cs. The Earthquake object holds the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which RIA Services requires in order to uniquely identify the entity.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ComponentModel.DataAnnotations;

namespace EarthquakeLocator.Web.Model
{
    /// <summary>
    /// Represents an earthquake occurrence and related information.
    /// </summary>
    [DataContract]
    public class Earthquake
    {
        /// <summary>
        /// Gets or sets the id.
        /// </summary>
        /// <value>The id.</value>
        [Key]
        [DataMember]
        public string Id { get; set; }

        /// <summary>
        /// Gets or sets the title.
        /// </summary>
        /// <value>The title.</value>
        [DataMember]
        public string Title { get; set; }

        /// <summary>
        /// Gets or sets the summary.
        /// </summary>
        /// <value>The summary.</value>
        [DataMember]
        public string Summary { get; set; }

        // additional properties omitted
    }
}
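For reference, the omitted members can be inferred from the assignments made in CreateEarthquake on the server; a sketch of how the rest of the class likely reads (types follow the values assigned there, e.g. double? becomes double via GetValueOrDefault()):

// Sketch of the members behind "additional properties omitted" above,
// inferred from EarthquakeService.CreateEarthquake. These declarations
// belong inside the Earthquake class body shown above.
[DataMember] public DateTime Date { get; set; }
[DataMember] public string Url { get; set; }
[DataMember] public string Age { get; set; }
[DataMember] public double Magnitude { get; set; }
[DataMember] public double Latitude { get; set; }
[DataMember] public double Longitude { get; set; }
[DataMember] public double DepthInKm { get; set; }
[DataMember] public string DepthDesc { get; set; }
[DataMember] public string UtcTime { get; set; }
[DataMember] public string LocalTime { get; set; }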
View Model

The recent trend of using the MVVM pattern for WPF and Silverlight applications provides a great way to separate the data and behavior logic out of the user interface layer. I have chosen the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I created a ViewModel folder in the Silverlight project and added an EarthquakeViewModel class that derives from ViewModelBase. Here is the code:

using System;
using System.Collections.ObjectModel;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using Microsoft.Maps.MapControl;
using GalaSoft.MvvmLight;
using EarthquakeLocator.Web.Model;
using EarthquakeLocator.Web.Services;

namespace EarthquakeLocator.ViewModel
{
    /// <summary>
    /// Provides data for views displaying earthquake information.
    /// </summary>
    public class EarthquakeViewModel : ViewModelBase
    {
        [Import]
        public EarthquakeContext Context;

        /// <summary>
        /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.
        /// </summary>
        public EarthquakeViewModel()
        {
            var catalog = new AssemblyCatalog(GetType().Assembly);
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
            Initialize();
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.
        /// </summary>
        /// <param name="context">The context.</param>
        public EarthquakeViewModel(EarthquakeContext context)
        {
            Context = context;
            Initialize();
        }

        private void Initialize()
        {
            MapCenter = new Location(20, -170);
            ZoomLevel = 2;
        }

        #region Private Methods

        private void OnAutoLoadDataChanged()
        {
            LoadEarthquakes();
        }

        private void LoadEarthquakes()
        {
            var query = Context.GetEarthquakesQuery();
            Context.Earthquakes.Clear();

            Context.Load(query, (op) =>
            {
                if ( !op.HasError )
                {
                    foreach ( var item in op.Entities )
                    {
                        Earthquakes.Add(item);
                    }
                }
            }, null);
        }

        #endregion Private Methods

        #region Properties

        private bool autoLoadData;
        /// <summary>
        /// Gets or sets a value indicating whether to auto load data.
        /// </summary>
        /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>
        public bool AutoLoadData
        {
            get { return autoLoadData; }
            set
            {
                if ( autoLoadData != value )
                {
                    autoLoadData = value;
                    RaisePropertyChanged("AutoLoadData");
                    OnAutoLoadDataChanged();
                }
            }
        }

        private ObservableCollection<Earthquake> earthquakes;
        /// <summary>
        /// Gets the collection of earthquakes to display.
        /// </summary>
        /// <value>The collection of earthquakes.</value>
        public ObservableCollection<Earthquake> Earthquakes
        {
            get
            {
                if ( earthquakes == null )
                {
                    earthquakes = new ObservableCollection<Earthquake>();
                }

                return earthquakes;
            }
        }

        private Location mapCenter;
        /// <summary>
        /// Gets or sets the map center.
        /// </summary>
        /// <value>The map center.</value>
        public Location MapCenter
        {
            get { return mapCenter; }
            set
            {
                if ( mapCenter != value )
                {
                    mapCenter = value;
                    RaisePropertyChanged("MapCenter");
                }
            }
        }

        private double zoomLevel;
        /// <summary>
        /// Gets or sets the zoom level.
        /// </summary>
        /// <value>The zoom level.</value>
        public double ZoomLevel
        {
            get { return zoomLevel; }
            set
            {
                if ( zoomLevel != value )
                {
                    zoomLevel = value;
                    RaisePropertyChanged("ZoomLevel");
                }
            }
        }

        #endregion Properties
    }
}

The EarthquakeViewModel class contains all of the properties that the various controls in our views will bind to. Be sure to read through the LoadEarthquakes() method, which calls the GetEarthquakes() method of our EarthquakeService via the EarthquakeContext proxy and transfers the loaded entities into the view model's Earthquakes collection.
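One nice consequence of the IQueryable<Earthquake> return type is that the client can compose additional query operators onto the query before it executes, and RIA Services applies them on the server before the results cross the wire. A minimal sketch, assuming we only wanted strong quakes (the Where/OrderByDescending composition is my illustration, not part of the original LoadEarthquakes):

// Sketch: client-side query composition. This would replace the first
// line of LoadEarthquakes(); the filter and sort run server-side
// before the entities are serialized to the client.
var query = Context.GetEarthquakesQuery()
                   .Where(q => q.Magnitude >= 5.0)
                   .OrderByDescending(q => q.Magnitude);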
Another thing to notice is what's going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute:

using System;
using System.ComponentModel.Composition;

namespace EarthquakeLocator.Web.Services
{
    /// <summary>
    /// The client side proxy for the EarthquakeService class.
    /// </summary>
    [Export]
    public partial class EarthquakeContext
    {
    }
}
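If you would rather not take the MEF dependency at all, the second constructor already provides the seam: you can pass the proxy in by hand. A minimal sketch, assuming only the classes shown above (ViewModelFactory is a hypothetical helper, not part of the project):

// Sketch: composing the view model without MEF, using the constructor
// overload that takes the proxy directly. Handy in a unit test or an
// application that avoids containers altogether.
public static class ViewModelFactory
{
    public static EarthquakeViewModel CreateViewModel()
    {
        var context = new EarthquakeContext();

        // Setting AutoLoadData fires LoadEarthquakes() via the setter.
        return new EarthquakeViewModel(context) { AutoLoadData = true };
    }
}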
One last piece before moving on to the user interface: I added a client side partial class for the Earthquake entity containing helper properties that we will bind to later:

using System;

namespace EarthquakeLocator.Web.Model
{
    /// <summary>
    /// Represents an earthquake occurrence and related information.
    /// </summary>
    public partial class Earthquake
    {
        /// <summary>
        /// Gets the location based on the current Latitude/Longitude.
        /// </summary>
        /// <value>The location.</value>
        public string Location
        {
            get { return String.Format("{0},{1}", Latitude, Longitude); }
        }

        /// <summary>
        /// Gets the size based on the Magnitude.
        /// </summary>
        /// <value>The size.</value>
        public double Size
        {
            get { return (Magnitude * 3); }
        }
    }
}

View

Now the fun part! Usually I would create a Views folder to hold all of my view controls, but I took the easy way out and added the following XAML to the default MainPage.xaml file. Be sure to add the bing prefix associated with the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I set the AutoLoadData property to true as a way to trigger the LoadEarthquakes() call. The MapItemsControl inside the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get automatic collection change notification via the INotifyCollectionChanged interface.

<UserControl x:Class="EarthquakeLocator.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"
    xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"
    mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >

    <UserControl.Resources>
        <DataTemplate x:Key="EarthquakeTemplate">
            <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"
                     Width="{Binding Size}" Height="{Binding Size}"
                     bing:MapLayer.Position="{Binding Location}"
                     bing:MapLayer.PositionOrigin="Center">
                <ToolTipService.ToolTip>
                    <StackPanel>
                        <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />
                        <TextBlock Text="{Binding UtcTime}" />
                        <TextBlock Text="{Binding LocalTime}" />
                        <TextBlock Text="{Binding DepthDesc}" />
                    </StackPanel>
                </ToolTipService.ToolTip>
            </Ellipse>
        </DataTemplate>
    </UserControl.Resources>

    <UserControl.DataContext>
        <vm:EarthquakeViewModel AutoLoadData="True" />
    </UserControl.DataContext>

    <Grid x:Name="LayoutRoot">
        <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"
                  Center="{Binding MapCenter, Mode=TwoWay}"
                  ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">
            <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"
                                  ItemTemplate="{StaticResource EarthquakeTemplate}" />
        </bing:Map>
    </Grid>
</UserControl>

The EarthquakeTemplate defines the Ellipse that represents each earthquake: its Width and Height come from the Magnitude (via the Size helper property), its Position on the map comes from the Location helper, and the tooltip appears when you mouse over each data point.
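As a design note, the Location and Size helper properties keep the XAML bindings simple, but an equally valid approach is an IValueConverter, which keeps presentation math out of the entity. A sketch of that alternative, assuming the same Magnitude-times-three scaling (MagnitudeToSizeConverter is my name for it, not part of the original project):

using System;
using System.Globalization;
using System.Windows.Data;

// Sketch: a value-converter alternative to the Size helper property.
// Same Magnitude * 3 scaling, moved out of the Earthquake entity.
public class MagnitudeToSizeConverter : IValueConverter
{
    public object Convert(object value, Type targetType,
                          object parameter, CultureInfo culture)
    {
        var magnitude = (double)value;
        return magnitude * 3;   // diameter of the ellipse in pixels
    }

    public object ConvertBack(object value, Type targetType,
                              object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();   // one-way binding only
    }
}

With the converter registered as a resource, the template would bind Width="{Binding Magnitude, Converter={StaticResource MagnitudeToSize}}" instead. I find the helper-property approach easier to read here, but the converter scales better if several views need the same formatting.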
Running the application plots each earthquake as a red ellipse on the map, sized by magnitude, with a tooltip like the one defined above appearing on mouse over. That concludes this portion of the show, but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy!

Additional Resources

USGS Earthquake Data Feeds
Brad Abrams shows how RIA Services and MVVM can work together

