Search Results

Search found 65750 results on 2630 pages for 'client application servic'.

  • Detect Application Shutdown in C# .NET?

    - by Michael Pfiffer
    I am writing a small console application (will be ran as a service) that basically starts a Java app when it is running, shuts itself down if the Java app closes, and shuts down the Java app if it closes. I think I have the first two working properly, but I don't know how to detect when the .NET application is shutting down so that I can shutdown the Java app prior to that happening. Google search just returns a bunch of stuff about detecting Windows shutting down. Can anyone tell me how I can handle that part and if the rest looks fine? namespace MinecraftDaemon { class Program { public static void LaunchMinecraft(String file, String memoryValue) { String memParams = "-Xmx" + memoryValue + "M" + " -Xms" + memoryValue + "M "; String args = memParams + "-jar " + file + " nogui"; ProcessStartInfo processInfo = new ProcessStartInfo("java.exe", args); processInfo.CreateNoWindow = true; processInfo.UseShellExecute = false; try { using (Process minecraftProcess = Process.Start(processInfo)) { minecraftProcess.WaitForExit(); } } catch { // Log Error } } static void Main(string[] args) { Arguments CommandLine = new Arguments(args); if (CommandLine["file"] != null && CommandLine["memory"] != null) { // Launch the Application LaunchMinecraft(CommandLine["file"], CommandLine["memory"]); } else { LaunchMinecraft("minecraft_server.jar", "1024"); } } } }
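
    One commonly suggested approach - offered here only as a hedged sketch, not as part of the original question - is to hook AppDomain.CurrentDomain.ProcessExit (raised on normal exit) and Console.CancelKeyPress (Ctrl+C / Ctrl+Break) and stop the child Java process from those handlers. The class and field names below are illustrative only:

        // Hedged sketch: react to the console app shutting down and stop the child process.
        using System;
        using System.Diagnostics;

        class ShutdownAwareLauncher
        {
            static Process javaProcess; // assumed to hold the launched Java process

            static void Main(string[] args)
            {
                // Raised when the CLR unloads on a normal exit.
                AppDomain.CurrentDomain.ProcessExit += (s, e) => StopJava();

                // Raised on Ctrl+C / Ctrl+Break.
                Console.CancelKeyPress += (s, e) => { StopJava(); e.Cancel = false; };

                var info = new ProcessStartInfo("java.exe", "-jar minecraft_server.jar nogui")
                {
                    CreateNoWindow = true,
                    UseShellExecute = false
                };
                javaProcess = Process.Start(info);
                javaProcess.WaitForExit();
            }

            static void StopJava()
            {
                if (javaProcess != null && !javaProcess.HasExited)
                {
                    javaProcess.Kill(); // or send the server a graceful stop command instead
                }
            }
        }

    Note that ProcessExit is not raised if the process is terminated forcibly (for example from Task Manager), so a service wrapper may still want a watchdog on the other side.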

    Read the article

  • Why is JavaScript not used for classical application development (compiled software)?

    - by Jose Faeti
    During my years of web development with JavaScript, I have come to the conclusion that it's an incredibly powerful language, and you can do amazing things with it. It offers a rich set of features, like: dynamic typing, first-class functions, nested functions, closures, functions as methods, functions as object constructors, prototype-based objects (almost everything is an object), regexes, and array and object literals. It seems to me that almost everything can be achieved with this kind of language; you can also emulate OO programming, since it provides great freedom and many different coding styles. With more software-oriented functionality (I/O, file system, input devices, etc.) I think it would be great for developing applications. Though, as far as I know, it's only used in web development, or in existing software as a scripting language. Only recently, maybe thanks to the V8 engine, has it been used more for other kinds of tasks (see node.js for example). Why has it until now been relegated only to web development? What is keeping it away from software development?

    Read the article

  • Can I execute an application built with Quickly (python - pygtk) on MS Windows?

    - by lesco
    I am working with Quickly, using Python and PyGTK. I know there is a packaging option, so I can create a .DEB file. That is for Ubuntu. I was reading about PyGTK and it seems that PyGTK runs on MS Windows too. So, can I execute an app built with Quickly on MS Windows? If so, how? Which .py file do I have to execute on MS Windows? (A Quickly project has many .py files.) Thanks. Ariel

    Read the article

  • SQLite for personal use

    - by ALife
    What are the applications for your personal use that need a small database like SQLite? I am thinking of trying a few popular databases, and SQLite is surely the first one I am planning to try, since I know barely anything about databases except some simple programming years ago. I have learned that SQLite is good for personal use. But embarrassingly I do not see any application except maybe managing my list of phone numbers/contact info, which has probably a few hundred items. What's your experience? FYI, I use EndNote for my references and soft copies of books, and I feel iTunes' music/media management is OK since I am not a frequent user anyway. And others? I do lots of coding, but I just use some simple etags tools for that. And I pretty much use .txt files (sometimes in AsciiDoc style) for my notes. I have quite a bunch of notes, but not that many either. So, really, what are your personal applications that need a small database instead of existing tools and plain text files?
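
    As an illustration of the one concrete use case mentioned (a small contacts list), here is a hedged sketch using the System.Data.SQLite ADO.NET provider; the file name, table name, and column names are made up for the example:

        // Hedged sketch: a tiny personal contacts database kept in a single SQLite file.
        using System;
        using System.Data.SQLite;

        class ContactsDemo
        {
            static void Main()
            {
                using (var conn = new SQLiteConnection("Data Source=contacts.db"))
                {
                    conn.Open();

                    // Create the table on first run.
                    new SQLiteCommand(
                        "CREATE TABLE IF NOT EXISTS contacts (name TEXT, phone TEXT)", conn)
                        .ExecuteNonQuery();

                    // Insert a row using parameters.
                    var insert = new SQLiteCommand(
                        "INSERT INTO contacts (name, phone) VALUES (@name, @phone)", conn);
                    insert.Parameters.AddWithValue("@name", "Alice");
                    insert.Parameters.AddWithValue("@phone", "555-0100");
                    insert.ExecuteNonQuery();

                    // Read everything back.
                    var reader = new SQLiteCommand("SELECT name, phone FROM contacts", conn)
                        .ExecuteReader();
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader["name"], reader["phone"]);
                }
            }
        }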

    Read the article

  • When Installing Oracle Client 11.1.0.7 can I use a common directory for Oracle Base?

    - by Anders
    I am trying to install Oracle Client 11.1.0.7 on a Windows Server 2008 64-bit machine. To some this might not be rocket science, but I can't understand what the options under the install screen "Specify Home Details" mean. The defaults suggest that I use an Oracle Base and install the software under my own account name. It also suggests that each user should have a separate Oracle Base. This seems counterintuitive to me. I am doing a server install, after all. All I want to use the installation for is to connect to an Oracle database from Reporting Services. Can I safely ignore this and just accept the defaults? What are the implications if I change the location to a common directory?

    Read the article

  • Why does terminal prompt text become black when connecting to Ubuntu with the NoMachine NX client?

    - by David B
    I'm using the NoMachine NX client for Windows to connect remotely to my Ubuntu machine. Every now and then I experience a strange phenomenon: the text in the prompt of all open terminal windows becomes black, so it can't be seen against the black background. Typed commands are also black, but their output is in normal colors. So I can run stuff, but can't see what I'm typing... After I close all open terminal windows and start a new terminal window, everything goes back to normal. This is really annoying and happens quite often. Any idea why?

    Read the article

  • Is there a BitTorrent client that can download files "in sequence"?

    - by Bob M
    Is there a BitTorrent client that can download files "in sequence"? For example, download clip 01.avi (high priority), clip 02.avi (normal priority), clip 03.avi (low priority), clip 04.avi (low), clip 05.avi (low); then when clip 01.avi is done, it will automatically become: clip 01.avi (high priority), clip 02.avi (high priority), clip 03.avi (normal priority), clip 04.avi (low priority), clip 05.avi (low). This can be useful when downloading .rar sets as well, since downloading clip.rar, and then clip.r00, r01, r02, in sequence allows previewing the file by using RAR to recompose it (even though the file is incomplete, it will allow previewing). Update: would making all files active still be considered a bad use of BT?

    Read the article

  • Backup Client/Server Software that Syncs only the Delta?

    - by Urda
    I have a co-located server and a desktop computer. I push small things and large amounts of small files (like my iTunes library) into a JungleDisk cloud. If a few files change there, no big deal; the file gets re-uploaded. For larger files JungleDisk backup isn't helpful: things like movies and VMware images that change a lot, but that I want backed up. Just not to JungleDisk, since that would cost me even more money. I am looking for a product, closed or open source (preferably open source), that will sync the change, or delta, to my personal server on a schedule. That way I can keep a copy of my larger things without paying JungleDisk a ton more, since they are in the range of many gigabytes. Right now these few items are backed up over FTP, and take forever. Both the client and server are Windows environments.

    Read the article

  • How do I encapsulate the application server from the web and database servers?

    - by SNyamathi
    So I've been doing some reading, and it seems like the best practice would be to have separate database, application, and web servers. There are a few things that I've failed to understand - please feel free to recommend any reading materials that would address these topics. Database (assume MySQL) to application server communication: does the database server do any sort of checks on the SQL commands sent / returned, or is it just a "dumb pipe" that responds to SQL commands by spitting back data? Application server (assume Tomcat) to web server: almost the reverse here - is it the web server that is more of a pipe to the Internet, forwarding requests to the application server and spitting back responses? I'm not wording this well, but I'm trying to ask - is it the application server that is responsible for validating data received from requests? e.g. parsing POSTs, validating user logins, encrypting/decrypting data. Furthermore, how do these two servers communicate? I'm trying to keep things as flexible as possible here, so while I could write a web server in Java and use Java to communicate between the web and app server, that doesn't sound very modular. What if I want to use Python or some other language to replace the web server later on? What if I want to make a non-web-facing application, used in house, written in C++ or something?
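
    As a hedged sketch of one common answer to the "how do they communicate" part: expose the application tier as a plain HTTP/JSON service, so the web tier (in any language) just forwards requests to it. The host name and route below are hypothetical placeholders, and HTTP is only one of several possible wire protocols:

        // Hedged sketch: a web-tier helper that treats the app tier as an HTTP/JSON service.
        // Because the boundary is plain HTTP, either tier can later be rewritten in another language.
        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class AppTierGateway
        {
            static readonly HttpClient client = new HttpClient
            {
                BaseAddress = new Uri("http://app-server.internal:8080/") // hypothetical app-tier host
            };

            // Forward a login request; the app tier owns validation, password checks, and token issuance.
            public static async Task<string> LoginAsync(string userJson)
            {
                var response = await client.PostAsync(
                    "api/login", new StringContent(userJson, Encoding.UTF8, "application/json"));
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync(); // token or error payload
            }
        }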

    Read the article

  • Client PC not booting when certain TFT plugged in - TFT or graphics card failure?

    - by Chake
    Here comes something quite strange: on a client machine (Dell Vostro 420) we experienced problems when booting: when turned on, the machine beeps normally but doesn't display anything and doesn't boot. After some testing I found out that this only happens if one (of the two) monitors (Iiyama ProLite E2472HDD) is plugged in while booting. If the other monitor (TFT 2) is plugged in, everything is fine. Here is a small illustration; TFT 1 is the bad guy:
    TFT 1 | TFT 2 | failure
      x   |   x   |    x
      x   |       |    x
          |   x   |
    After the BIOS phase I can safely plug in TFT 1 and everything works just fine. The question is, what can be done to avoid this behavior: change the monitor (Iiyama ProLite E2472HDD)? Change the graphics card (GeForce 9800 GT)? Other suggestions?

    Read the article

  • Java server/client TCP communication ends with RST

    - by Senne
    I'm trying to figure out if this is normal. Because without errors, a connection should be terminated by: FIN -> <- ACK <- FIN ACK -> I get this at the end of a TCP connection (over SSL, but i also get it with non-encrypted): From To 1494 server client TCP search-agent > 59185 [PSH, ACK] Seq=25974 Ack=49460 Win=63784 Len=50 1495 client server TCP 59185 > search-agent [ACK] Seq=49460 Ack=26024 Win=63565 Len=0 1496 client server TCP 59185 > search-agent [PSH, ACK] Seq=49460 Ack=26024 Win=63565 Len=23 1497 client server TCP 59185 > search-agent [FIN, ACK] Seq=49483 Ack=26024 Win=63565 Len=0 1498 server client TCP search-agent > 59185 [PSH, ACK] Seq=26024 Ack=49484 Win=63784 Len=23 1499 client server TCP 59185 > search-agent [RST, ACK] Seq=49484 Ack=26047 Win=0 Len=0 The client exits normally and reaches socket.close, shouldn't then the connection be shut down normally, without a reset? I can't find anything about the TCP streams of java on google... Here is my code: Server: package Security; import java.io.*; import java.net.*; import javax.net.ServerSocketFactory; import javax.net.ssl.*; import java.util.*; public class SSLDemoServer { private static ServerSocket serverSocket; private static final int PORT = 1234; public static void main(String[] args) throws IOException { int received = 0; String returned; ObjectInputStream input = null; PrintWriter output = null; Socket client; System.setProperty("javax.net.ssl.keyStore", "key.keystore"); System.setProperty("javax.net.ssl.keyStorePassword", "vwpolo"); System.setProperty("javax.net.ssl.trustStore", "key.keystore"); System.setProperty("javax.net.ssl.trustStorePassword", "vwpolo"); try { System.out.println("Trying to set up server ..."); ServerSocketFactory factory = SSLServerSocketFactory.getDefault(); serverSocket = factory.createServerSocket(PORT); System.out.println("Server started!\n"); } catch (IOException ioEx) { System.out.println("Unable to set up port!"); ioEx.printStackTrace(); System.exit(1); } while(true) { client = serverSocket.accept(); System.out.println("Client trying to connect..."); try { System.out.println("Trying to create inputstream..."); input = new ObjectInputStream(client.getInputStream()); System.out.println("Trying to create outputstream..."); output = new PrintWriter(client.getOutputStream(), true); System.out.println("Client successfully connected!"); while( true ) { received = input.readInt(); returned = Integer.toHexString(received); System.out.print(" " + received); output.println(returned.toUpperCase()); } } catch(SSLException sslEx) { System.out.println("Connection failed! (non-SSL connection?)\n"); client.close(); continue; } catch(EOFException eofEx) { System.out.println("\nEnd of client data.\n"); } catch(IOException ioEx) { System.out.println("I/O problem! 
(correct inputstream?)"); } try { input.close(); output.close(); } catch (Exception e) { } client.close(); System.out.println("Client closed.\n"); } } } Client: package Security; import java.io.*; import java.net.*; import javax.net.ssl.*; import java.util.*; public class SSLDemoClient { private static InetAddress host; private static final int PORT = 1234; public static void main(String[] args) { System.setProperty("javax.net.ssl.keyStore", "key.keystore"); System.setProperty("javax.net.ssl.keyStorePassword", "vwpolo"); System.setProperty("javax.net.ssl.trustStore", "key.keystore"); System.setProperty("javax.net.ssl.trustStorePassword", "vwpolo"); System.out.println("\nCreating SSL socket ..."); SSLSocket socket = null; try { host = InetAddress.getByName("192.168.56.101"); SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault(); socket = (SSLSocket) factory.createSocket(host, PORT); socket.startHandshake(); } catch(UnknownHostException uhEx) { System.out.println("\nHost ID not found!\n"); System.exit(1); } catch(SSLException sslEx) { System.out.println("\nHandshaking unsuccessful ..."); System.exit(1); } catch (IOException e) { e.printStackTrace(); } System.out.println("\nHandshaking succeeded ...\n"); SSLClientThread client = new SSLClientThread(socket); SSLReceiverThread receiver = new SSLReceiverThread(socket); client.start(); receiver.start(); try { client.join(); receiver.join(); System.out.println("Trying to close..."); socket.close(); } catch(InterruptedException iEx) { iEx.printStackTrace(); } catch(IOException ioEx) { ioEx.printStackTrace(); } System.out.println("\nClient finished."); } } class SSLClientThread extends Thread { private SSLSocket socket; public SSLClientThread(SSLSocket s) { socket = s; } public void run() { try { ObjectOutputStream output = new ObjectOutputStream(socket.getOutputStream()); for( int i = 1; i < 1025; i++) { output.writeInt(i); sleep(10); output.flush(); } output.flush(); sleep(1000); output.close(); } catch(IOException ioEx) { System.out.println("Socket closed or unable to open socket."); } catch(InterruptedException iEx) { iEx.printStackTrace(); } } } class SSLReceiverThread extends Thread { private SSLSocket socket; public SSLReceiverThread(SSLSocket s) { socket = s; } public void run() { String response = null; BufferedReader input = null; try { input = new BufferedReader( new InputStreamReader(socket.getInputStream())); try { response = input.readLine(); while(!response.equals(null)) { System.out.print(response + " "); response = input.readLine(); } } catch(Exception e) { System.out.println("\nEnd of server data.\n"); } input.close(); } catch(IOException ioEx) { ioEx.printStackTrace(); } } }

    Read the article

  • Entity Framework Code-First, OData & Windows Phone Client

    - by Jon Galloway
    Entity Framework Code-First is the coolest thing since sliced bread, Windows  Phone is the hottest thing since Tickle-Me-Elmo and OData is just too great to ignore. As part of the Full Stack project, we wanted to put them together, which turns out to be pretty easy… once you know how.   EF Code-First CTP5 is available now and there should be very few breaking changes in the release edition, which is due early in 2011.  Note: EF Code-First evolved rapidly and many of the existing documents and blog posts which were written with earlier versions, may now be obsolete or at least misleading.   Code-First? With traditional Entity Framework you start with a database and from that you generate “entities” – classes that bridge between the relational database and your object oriented program. With Code-First (Magic-Unicorn) (see Hanselman’s write up and this later write up by Scott Guthrie) the Entity Framework looks at classes you created and says “if I had created these classes, the database would have to have looked like this…” and creates the database for you! By deriving your entity collections from DbSet and exposing them via a class that derives from DbContext, you "turn on" database backing for your POCO with a minimum of code and no hidden designer or configuration files. POCO == Plain Old CLR Objects Your entity objects can be used throughout your applications - in web applications, console applications, Silverlight and Windows Phone applications, etc. In our case, we'll want to read and update data from a Windows Phone client application, so we'll expose the entities through a DataService and hook the Windows Phone client application to that data via proxies.  Piece of Pie.  Easy as cake. The Demo Architecture To see this at work, we’ll create an ASP.NET/MVC application which will act as the host for our Data Service.  We’ll create an incredibly simple data layer using EF Code-First on top of SQLCE4 and we’ll expose the data in a WCF Data Service using the oData protocol.  Our Windows Phone 7 client will instantiate  the data context via a URI and load the data asynchronously. Setting up the Server project with MVC 3, EF Code First, and SQL CE 4 Create a new application of type ASP.NET MVC 3 and name it DeadSimpleServer.  We need to add the latest SQLCE4 and Entity Framework Code First CTP's to our project. Fortunately, NuGet makes that really easy. Open the Package Manager Console (View / Other Windows / Package Manager Console) and type in "Install-Package EFCodeFirst.SqlServerCompact" at the PM> command prompt. Since NuGet handles dependencies for you, you'll see that it installs everything you need to use Entity Framework Code First in your project. PM> install-package EFCodeFirst.SqlServerCompact 'SQLCE (= 4.0.8435.1)' not installed. Attempting to retrieve dependency from source... Done 'EFCodeFirst (= 0.8)' not installed. Attempting to retrieve dependency from source... Done 'WebActivator (= 1.0.0.0)' not installed. Attempting to retrieve dependency from source... Done You are downloading SQLCE from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. 
Successfully installed 'SQLCE 4.0.8435.1' You are downloading EFCodeFirst from Microsoft, the license agreement to which is available at http://go.microsoft.com/fwlink/?LinkID=206497. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. Successfully installed 'EFCodeFirst 0.8' Successfully installed 'WebActivator 1.0.0.0' You are downloading EFCodeFirst.SqlServerCompact from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. Successfully installed 'EFCodeFirst.SqlServerCompact 0.8' Successfully added 'SQLCE 4.0.8435.1' to EfCodeFirst-CTP5 Successfully added 'EFCodeFirst 0.8' to EfCodeFirst-CTP5 Successfully added 'WebActivator 1.0.0.0' to EfCodeFirst-CTP5 Successfully added 'EFCodeFirst.SqlServerCompact 0.8' to EfCodeFirst-CTP5 Note: We're using SQLCE 4 with Entity Framework here because they work really well together from a development scenario, but you can of course use Entity Framework Code First with other databases supported by Entity framework. Creating The Model using EF Code First Now we can create our model class. Right-click the Models folder and select Add/Class. Name the Class Person.cs and add the following code: using System.Data.Entity; namespace DeadSimpleServer.Models { public class Person { public int ID { get; set; } public string Name { get; set; } } public class PersonContext : DbContext { public DbSet<Person> People { get; set; } } } Notice that the entity class Person has no special interfaces or base class. There's nothing special needed to make it work - it's just a POCO. The context we'll use to access the entities in the application is called PersonContext, but you could name it anything you wanted. The important thing is that it inherits DbContext and contains one or more DbSet which holds our entity collections. Adding Seed Data We need some testing data to expose from our service. The simplest way to get that into our database is to modify the CreateCeDatabaseIfNotExists class in AppStart_SQLCEEntityFramework.cs by adding some seed data to the Seed method: protected virtual void Seed( TContext context ) { var personContext = context as PersonContext; personContext.People.Add( new Person { ID = 1, Name = "George Washington" } ); personContext.People.Add( new Person { ID = 2, Name = "John Adams" } ); personContext.People.Add( new Person { ID = 3, Name = "Thomas Jefferson" } ); personContext.SaveChanges(); } The CreateCeDatabaseIfNotExists class name is pretty self-explanatory - when our DbContext is accessed and the database isn't found, a new one will be created and populated with the data in the Seed method. 
There's one more step to make that work - we need to uncomment a line in the Start method at the top of of the AppStart_SQLCEEntityFramework class and set the context name, as shown here, public static class AppStart_SQLCEEntityFramework { public static void Start() { DbDatabase.DefaultConnectionFactory = new SqlCeConnectionFactory("System.Data.SqlServerCe.4.0"); // Sets the default database initialization code for working with Sql Server Compact databases // Uncomment this line and replace CONTEXT_NAME with the name of your DbContext if you are // using your DbContext to create and manage your database DbDatabase.SetInitializer(new CreateCeDatabaseIfNotExists<PersonContext>()); } } Now our database and entity framework are set up, so we can expose data via WCF Data Services. Note: This is a bare-bones implementation with no administration screens. If you'd like to see how those are added, check out The Full Stack screencast series. Creating the oData Service using WCF Data Services Add a new WCF Data Service to the project (right-click the project / Add New Item / Web / WCF Data Service). We’ll be exposing all the data as read/write.  Remember to reconfigure to control and minimize access as appropriate for your own application. Open the code behind for your service. In our case, the service was called PersonTestDataService.svc so the code behind class file is PersonTestDataService.svc.cs. using System.Data.Services; using System.Data.Services.Common; using System.ServiceModel; using DeadSimpleServer.Models; namespace DeadSimpleServer { [ServiceBehavior( IncludeExceptionDetailInFaults = true )] public class PersonTestDataService : DataService<PersonContext> { // This method is called only once to initialize service-wide policies. public static void InitializeService( DataServiceConfiguration config ) { config.SetEntitySetAccessRule( "*", EntitySetRights.All ); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; config.UseVerboseErrors = true; } } } We're enabling a few additional settings to make it easier to debug if you run into trouble. The ServiceBehavior attribute is set to include exception details in faults, and we're using verbose errors. You can remove both of these when your service is working, as your public production service shouldn't be revealing exception information. 
You can view the output of the service by running the application and browsing to http://localhost:[portnumber]/PersonTestDataService.svc/: <service xml:base="http://localhost:49786/PersonTestDataService.svc/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:app="http://www.w3.org/2007/app" xmlns="http://www.w3.org/2007/app"> <workspace> <atom:title>Default</atom:title> <collection href="People"> <atom:title>People</atom:title> </collection> </workspace> </service> This indicates that the service exposes one collection, which is accessible by browsing to http://localhost:[portnumber]/PersonTestDataService.svc/People <?xml version="1.0" encoding="iso-8859-1" standalone="yes"?> <feed xml:base=http://localhost:49786/PersonTestDataService.svc/ xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <title type="text">People</title> <id>http://localhost:49786/PersonTestDataService.svc/People</id> <updated>2010-12-29T01:01:50Z</updated> <link rel="self" title="People" href="People" /> <entry> <id>http://localhost:49786/PersonTestDataService.svc/People(1)</id> <title type="text"></title> <updated>2010-12-29T01:01:50Z</updated> <author> <name /> </author> <link rel="edit" title="Person" href="People(1)" /> <category term="DeadSimpleServer.Models.Person" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:ID m:type="Edm.Int32">1</d:ID> <d:Name>George Washington</d:Name> </m:properties> </content> </entry> <entry> ... </entry> </feed> Let's recap what we've done so far. But enough with services and XML - let's get this into our Windows Phone client application. Creating the DataServiceContext for the Client Use the latest DataSvcUtil.exe from http://odata.codeplex.com. As of today, that's in this download: http://odata.codeplex.com/releases/view/54698 You need to run it with a few options: /uri - This will point to the service URI. In this case, it's http://localhost:59342/PersonTestDataService.svc  Pick up the port number from your running server (e.g., the server formerly known as Cassini). /out - This is the DataServiceContext class that will be generated. You can name it whatever you'd like. /Version - should be set to 2.0 /DataServiceCollection - Include this flag to generate collections derived from the DataServiceCollection base, which brings in all the ObservableCollection goodness that handles your INotifyPropertyChanged events for you. Here's the console session from when we ran it: <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged"> Next, to keep things simple, change the Binding on the two TextBlocks within the DataTemplate to Name and ID, <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel Margin="0,0,0,17" Width="432"> <TextBlock Text="{Binding Name}" TextWrapping="Wrap" Style="{StaticResource PhoneTextExtraLargeStyle}" /> <TextBlock Text="{Binding ID}" TextWrapping="Wrap" Margin="12,-6,12,0" Style="{StaticResource PhoneTextSubtleStyle}" /> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> Getting The Context In the code-behind you’ll first declare a member variable to hold the context from the Entity Framework. This is named using convention over configuration. 
The db type is Person and the context is of type PersonContext, You initialize it by providing the URI, in this case using the URL obtained from the Cassini web server, PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) ); Create a second member variable of type DataServiceCollection<Person> but do not initialize it, DataServiceCollection<Person> people; In the constructor you’ll initialize the DataServiceCollection using the PersonContext, public MainPage() { InitializeComponent(); people = new DataServiceCollection<Person>( context ); Finally, you’ll load the people collection using the LoadAsync method, passing in the fully specified URI for the People collection in the web service, people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) ); Note that this method runs asynchronously and when it is finished the people  collection is already populated. Thus, since we didn’t need or want to override any of the behavior we don’t implement the LoadCompleted. You can use the LoadCompleted event if you need to do any other UI updates, but you don't need to. The final code is as shown below: using System; using System.Data.Services.Client; using System.Windows; using System.Windows.Controls; using DeadSimpleServer.Models; using Microsoft.Phone.Controls; namespace WindowsPhoneODataTest { public partial class MainPage : PhoneApplicationPage { PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) ); DataServiceCollection<Person> people; // Constructor public MainPage() { InitializeComponent(); // Set the data context of the listbox control to the sample data // DataContext = App.ViewModel; people = new DataServiceCollection<Person>( context ); people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) ); DataContext = people; this.Loaded += new RoutedEventHandler( MainPage_Loaded ); } // Handle selection changed on ListBox private void MainListBox_SelectionChanged( object sender, SelectionChangedEventArgs e ) { // If selected index is -1 (no selection) do nothing if ( MainListBox.SelectedIndex == -1 ) return; // Navigate to the new page NavigationService.Navigate( new Uri( "/DetailsPage.xaml?selectedItem=" + MainListBox.SelectedIndex, UriKind.Relative ) ); // Reset selected index to -1 (no selection) MainListBox.SelectedIndex = -1; } // Load data for the ViewModel Items private void MainPage_Loaded( object sender, RoutedEventArgs e ) { if ( !App.ViewModel.IsDataLoaded ) { App.ViewModel.LoadData(); } } } } With people populated we can set it as the DataContext and run the application; you’ll find that the Name and ID are displayed in the list on the Mainpage. Here's how the pieces in the client fit together: Complete source code available here

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage by accessing the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> var message = window.localStorage.getItem("message"); </script>   You can only add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser.
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.

    Read the article

  • Google I/O 2012 - Building Web Applications using Google APIs and JavaScript Client for Google APIs

    Brendan O'Brien - In this session, you will learn how to use the features of the Google API client for JavaScript to build rich web applications. Some of the features we will demonstrate include authentication and CORS. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 52:00.

    Read the article

  • Security aspects of an ASP.NET application that can be pointed out to the client

    - by Maxim V. Pavlov
    I need to write several passages of text in an offer to the client about the security layer in an ASP.NET MVC web solution. I am aware of the security that comes along with MVC 3 and the improvements in MVC 4. But all of them are non-conceptual, except for AntiForgeryToken (AntiXSS) and the built-in SQL injection immunity (with a little encoding needed by hand). What would be the main points of ASP.NET security I can "show off" in an offer to the client?
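
    For example, the request-forgery protection mentioned above can be shown in the offer with a short snippet. This is a hedged sketch assuming a standard MVC 3 controller and Razor view (the controller and action names are illustrative); the matching view renders @Html.AntiForgeryToken() inside its form:

        // Hedged sketch: [ValidateAntiForgeryToken] rejects POSTs that do not carry the
        // token emitted by @Html.AntiForgeryToken() in the Razor form.
        using System.Web.Mvc;

        public class ProfileController : Controller
        {
            [HttpGet]
            public ActionResult Edit()
            {
                return View(); // the view contains Html.BeginForm() plus Html.AntiForgeryToken()
            }

            [HttpPost]
            [ValidateAntiForgeryToken]
            public ActionResult Edit(FormCollection form)
            {
                // Update the profile only when the anti-forgery token round-trips correctly.
                return RedirectToAction("Edit");
            }
        }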

    Read the article

  • Client Centric Approach of AJAX using Toolkit

    In this article, the author discusses the client-centric approach to AJAX using the ASP.NET AJAX Toolkit. In general, the server-centric approach (using the UpdatePanel) is very popular in this field, but when it comes to performance, the client-centric approach is preferred. Sandeep also demonstrates how to call a web service using JavaScript.
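
    For reference, the server half of "call a web service using JavaScript" in ASP.NET AJAX is a script-enabled web service, for which a JavaScript proxy can be generated on the page. A hedged sketch (the service and method names are made up):

        // Hedged sketch: decorating the service with [ScriptService] allows ASP.NET AJAX
        // (via a ScriptManager service reference) to generate a JavaScript proxy for it,
        // so the browser can call GetQuote directly without an UpdatePanel postback.
        using System.Web.Services;
        using System.Web.Script.Services;

        [WebService(Namespace = "http://example.com/")]
        [ScriptService]
        public class QuoteService : WebService
        {
            [WebMethod]
            public string GetQuote(string symbol)
            {
                // Return data straight to the browser.
                return symbol + ": 42.00";
            }
        }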

    Read the article

  • Which email client works best with Gmail IMAP?

    - by Ivan
    I use Gmail (and I use labels intensively), and because I now have to use a very slow Internet connection, I've come to the idea that I should try using a desktop email client. What application (Thunderbird, Evolution, Claws, or some other) works best with Gmail via IMAP? First of all I want correct Gmail label support (for example, an email client shouldn't treat Gmail labels as independent folders, handling messages with multiple labels as multiple different identical messages in different folders), including the special Gmail label folders like Bin, Spam, Drafts and Sent.

    Read the article

  • More on PHP and Oracle 11gR2 Improvements to Client Result Caching

    - by christopher.jones
    Oracle 11.2 brought several improvements to Client Result Caching. CRC is way for the results of queries to be cached in the database client process for reuse.  In an Oracle OpenWorld presentation "Best Practices for Developing Performant Application" my colleague Luxi Chidambaran had a (non-PHP generated) graph for the Niles benchmark that shows a DB CPU reduction up to 600% and response times up to 22% faster when using CRC. Sometimes CRC is called the "Consistent Client Cache" because Oracle automatically invalidates the cache if table data is changed.  This makes it easy to use without needing application logic rewrites. There are a few simple database settings to turn on and tune CRC, so management is also easy. PHP OCI8 as a "client" of the database can use CRC.  The cache is per-process, so plan carefully before caching large data sets.  Tables that are candidates for caching are look-up tables where the network transfer cost dominates. CRC is really easy in 11.2 - I'll get to that in a moment.  It was also pretty easy in Oracle 11.1 but it needed some tiny application changes.  In PHP it was used like: $s = oci_parse($c, "select /*+ result_cache */ * from employees"); oci_execute($s, OCI_NO_AUTO_COMMIT); // Use OCI_DEFAULT in OCI8 <= 1.3 oci_fetch_all($s, $res); I blogged about this in the past.  The query had to include a specific hint that you wanted the results cached, and you needed to turn off auto committing during execution either with the OCI_DEFAULT flag or its new, better-named alias OCI_NO_AUTO_COMMIT.  The no-commit flag rule didn't seem reasonable to me because most people wouldn't be specific about the commit state for a query. Now in Oracle 11.2, DBAs can now nominate tables for caching, either with CREATE TABLE or ALTER TABLE.  That means you don't need the query hint anymore.  As well, the no-commit flag requirement has been lifted.  Your code can now look like: $s = oci_parse($c, "select * from employees"); oci_execute($s); oci_fetch_all($s, $res); Since your code probably already looks like this, your DBA can find the top queries in the database and simply tune the system by turning on CRC in the database and issuing an ALTER TABLE statement for candidate tables.  Voila. Another CRC improvement in Oracle 11.2 is that it works with DRCP connection pooling. There is some fine print about what is and isn't cached, check the Oracle manuals for details.  If you're using 11.1 or non-DRCP "dedicated servers" then make sure you use oci_pconnect() persistent connections.  Also in PHP don't bind strings in the query, although binding as SQLT_INT is OK.

    Read the article
