Search Results


  • How do you send a named pipe string from unmanaged to managed code space?

    - by billmcf
    I appear to have a named pipes 101 issue. I have a very simple setup connecting a simplex named pipe that transmits from a C++ unmanaged app to a C# managed app. The pipe connects, but I cannot send a "message" through the pipe unless I close the handle, which appears to flush the buffer and pass the message through. It's like the message is blocked. I have tried reversing the roles of client/server and invoking them with different flag combinations, without any luck. I can easily send messages in the other direction, from C# managed to C++ unmanaged. Does anyone have any insight? Can any of you successfully send messages from unmanaged C++ to managed C#? I can find plenty of examples of intra-managed or intra-unmanaged pipes, but not managed to/from unmanaged - just claims that it can be done. In the listings, I have omitted much of the wrapper code for clarity. The key bits, I believe, are the pipe connection/creation/read and write methods. Don't worry too much about blocking/threading here.

    C# server side:

    // This runs in its own thread and so it is OK to block
    private void ConnectToClient()
    {
        // This server will listen to the sending client
        if (m_InPipeStream == null)
        {
            m_InPipeStream = new NamedPipeServerStream("TestPipe", PipeDirection.In, 1);
        }
        // Wait for client to connect to our server
        m_InPipeStream.WaitForConnection();
        // Verify client is running
        if (!m_InPipeStream.IsConnected)
        {
            return;
        }
        // Start listening for messages on the client stream
        if (m_InPipeStream != null && m_InPipeStream.CanRead)
        {
            ReadThread = new Thread(new ParameterizedThreadStart(Read));
            ReadThread.Start(m_InPipeStream);
        }
    }

    // This runs in its own thread and so it is OK to block
    private void Read(object serverObj)
    {
        NamedPipeServerStream pipeStream = (NamedPipeServerStream)serverObj;
        using (StreamReader sr = new StreamReader(pipeStream))
        {
            while (true)
            {
                string buffer = "";
                try
                {
                    // Blocks here until the handle is closed by the client side!!
                    buffer = sr.ReadLine(); // <<<<<<<<<<<<<< Sticks here
                }
                catch
                {
                    // Read error
                    break;
                }
                // Client has disconnected?
                if (buffer == null || buffer.Length == 0)
                    break;
                // Fire message received event if message is non-empty
                if (MessageReceived != null && buffer != "")
                {
                    MessageReceived(buffer);
                }
            }
        }
    }

    C++ client side:

    // Static - running in its own thread.
    DWORD CNamedPipe::ListenForServer(LPVOID arg)
    {
        // The calling app (this) is passed as the parameter
        CNamedPipe* app = (CNamedPipe*)arg;

        // Out-pipe: connect as a client to a waiting server
        app->m_hOutPipeHandle = CreateFile("\\\\.\\pipe\\TestPipe", GENERIC_WRITE, 0,
                                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

        // Could not create handle
        if (app->m_hInPipeHandle == NULL || app->m_hInPipeHandle == INVALID_HANDLE_VALUE)
        {
            return 1;
        }
        return 0;
    }

    // Sends a message to the server
    BOOL CNamedPipe::SendMessage(CString message)
    {
        DWORD dwSent;
        if (m_hOutPipeHandle == NULL || m_hOutPipeHandle == INVALID_HANDLE_VALUE)
        {
            return FALSE;
        }
        else
        {
            BOOL bOK = WriteFile(m_hOutPipeHandle, message, message.GetLength() + 1, &dwSent, NULL);
            //FlushFileBuffers(m_hOutPipeHandle); // <<<<<<< Tried this
            return (!bOK || (message.GetLength() + 1) != dwSent) ? FALSE : TRUE;
        }
    }

    // Somewhere in the Windows C++/MFC code...
    // This write is non-blocking. It just passes through, having loaded the pipe.
    m_pNamedPipe->SendMessage("Hi de hi");
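    One detail worth checking against the symptom above: StreamReader.ReadLine only returns once it sees a newline or the end of the stream, and the C++ client writes "Hi de hi" plus a null terminator with no '\n' - so closing the handle is what finally ends the stream and releases the blocked ReadLine. Here is a minimal hedged C# sketch (my illustration, not the asker's code; it assumes the same "TestPipe" name) that reads raw bytes instead, so no newline is required:

        using System;
        using System.IO.Pipes;
        using System.Text;

        class RawPipeReader
        {
            static void Main()
            {
                using (var server = new NamedPipeServerStream("TestPipe", PipeDirection.In, 1))
                {
                    server.WaitForConnection();
                    var buffer = new byte[4096];
                    int read;
                    // Read returns as soon as any bytes arrive; no '\n' needed.
                    while ((read = server.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));
                    }
                }
            }
        }

    Alternatively, appending "\n" to each message on the C++ side should let the existing ReadLine loop proceed.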


  • How to obtain the relative path of a resource in a J2EE project

    - by Neeraj
    I have a Dynamic Web Project containing a flat (text) file. I have created a servlet in which I need to use this file. My code is as follows:

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // String resource = request.getParameter("json");
        if (resource != null && !resource.equals("")) {
            // use getResourceAsStream() to properly get the file.
            InputStream is = getServletContext().getResourceAsStream("rateJSON");
            if (is != null) { // the resource exists
                response.setContentType("application/json");
                response.setHeader("Pragma", "No-cache");
                response.setDateHeader("Expires", 0);
                response.setHeader("Cache-Control", "no-cache");
                StringWriter sw = new StringWriter();
                for (int c = is.read(); c != -1; c = is.read()) {
                    sw.write(c);
                }
                PrintWriter out = response.getWriter();
                out.print(sw.toString());
                out.flush();
            }
        }
    }

    The problem is that the InputStream is null. I'm not sure how to get the correct relative path. I'm using JBoss as the app server. I have added the resource file to the WebContent directory of the Dynamic Web Project. As a different approach, I tried this:

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // TODO Auto-generated method stub
        ServletConfig config = getServletConfig();
        String contextName = config.getInitParameter("ApplicationName");
        System.out.println("context name" + contextName);
        String contextPath = config.getServletContext().getRealPath(contextName);
        System.out.println("context Path" + contextPath);
        // contextPath = contextPath.substring(0, contextPath.indexOf(contextName));
        contextPath += "\\rateJSON.txt";
        System.out.println(contextPath);
        String resource = request.getParameter("json");
        System.out.println("Hi there1" + resource);
        if (resource != null && !resource.equals("")) {
            System.out.println("Hi there");
            // use getResourceAsStream() to properly get the file.
            // InputStream is = getServletContext().getResourceAsStream(resource);
            InputStream is = getServletConfig().getServletContext().getResourceAsStream(contextPath);
            if (is != null) { // the resource exists
                System.out.println("Hi there2");
                response.setContentType("application/json");
                response.setHeader("Pragma", "No-cache");
                response.setDateHeader("Expires", 0);
                response.setHeader("Cache-Control", "no-cache");
                StringWriter sw = new StringWriter();
                for (int c = is.read(); c != -1; c = is.read()) {
                    sw.write(c);
                    System.out.println(c);
                }
                PrintWriter out = response.getWriter();
                out.print(sw.toString());
                System.out.println(sw.toString());
                out.flush();
            }
        }
    }

    The value of contextPath is now: C:\JBOSS\jboss-5.0.1.GA\server\default\tmp\4p72206b-uo5r7k-g0vn9pof-1-g0vsh0o9-b7\Nationwide.war\WEB-INF\rateJSON
    But the rateJSON file is not at this location. It seems JBoss is not putting this file in the .war, or doesn't deploy it? Could someone please help me?


  • How to get the selected index of a dropdownlist with JavaScript

    - by rui martins
    I have a table with several @Html.DropDownListFor helpers in it. I was trying to read the selected value using JavaScript, but all I read is the generated HTML. How can I read it?

    for (var i = 0; i < oTable.length; i++) {
        **userModel.Id = oTable[i][0];**
        regionModel.Users.push(userModel);
        processModel.Regions.push(regionModel);
        userModel = { "Id": "", "Name": "" };
        regionModel = { "Id": "", "Name": "", "Users": [] };
    }

    TABLE

    <table class="tbl" id="tbl">
        <thead>
            <tr>
                <th>Region</th>
                <th>Owner</th>
            </tr>
        </thead>
        <tbody>
            @if (Model != null)
            {
                foreach (var item in Model.Regions)
                {
                    <tr>
                        <td>@Html.DisplayTextFor(i => item.Name)</td>
                        <td>@Html.DropDownListFor(i => item.Users, new SelectList(item.Users, "Id", "Name"))</td>
                    </tr>
                }
            }
        </tbody>
    </table>

    CODE

    function ProcessSave() {
        // Step 1: Read View Data and Create JSON Object
        var userModel = { "User": "", "Name": "" };
        var regionModel = { "Region": "", "Name": "", "Users": [] };
        var processModel = { "User": "", "Description": "", "Code": "", "Regions": [] };
        processModel.Name = $("#Name").val();
        processModel.Code = $("#Code").val();
        processModel.Description = $("#Description").val();
        var oTable = $('.tbl').dataTable().fnGetData();
        for (var i = 0; i < oTable.length; i++) {
            regionModel.Name = oTable[i][0];
            userModel.User = oTable[i][1];
            userModel.Name = oTable[i][1];
            regionModel.Users.push(userModel);
            processModel.Regions.push(regionModel);
            userModel = { "Id": "", "Name": "" };
            regionModel = { "Name": "", "Users": [] };
        }
        // Step 1: Ends Here
        // Step 2: Ajax Post
        // Here I have used an ajax post for saving/updating information
        $.ajax({
            url: '/Process/Create',
            data: JSON.stringify(processModel),
            type: 'POST',
            contentType: 'application/json;',
            dataType: 'json',
            success: function (result) {
                if (result.Success == "1") {
                    window.location.href = "/Process/Index";
                } else {
                    alert(result.ex);
                }
            }
        });
    }

    MODELS

    namespace TestingTool.ViewModels {
        public partial class ProcessModel {
            public string Name { get; set; }
            public string Description { get; set; }
            public string Code { get; set; }
            public virtual ICollection<RegionModel> Regions { get; set; }
        }
    }

    namespace TestingTool.ViewModels {
        public class RegionModel {
            public int Region { get; set; }
            public string Name { get; set; }
            public virtual ICollection<UserModel> Users { get; set; }
        }
    }

    namespace TestingTool.ViewModels {
        public class UserModel {
            public int User { get; set; }
            public string Name { get; set; }
        }
    }


  • Use of select or multithreading for 80 or more clients?

    - by Tushar Goel
    I am working on a project in which I need to read from 80 or more clients, write their output into a file continuously, and then read this new data for another task. My question is: should I use select or multithreading? Also, I tried multithreading using read/fgets and write/fputs calls, but as they are blocking calls and only one operation can be performed at a time, it is not feasible. Any idea is much appreciated.

    Update 1: I have tried to implement the same using a condition variable. I was able to achieve this, but it is writing and reading one at a time. When another client tries to write, it cannot write unless I quit from the 1st thread. I do not understand this. This should work, shouldn't it? What mistake am I making?

    Update 2: Thanks all. I succeeded in getting this model implemented using a mutex and condition variable. The updated code is as below:

    /* header file */
    char *mailbox;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t writer = PTHREAD_COND_INITIALIZER;

    int main(int argc, char *argv[])
    {
        pthread_t t1, t2;
        pthread_attr_t attr;
        int fd, sock, *newfd;
        struct sockaddr_in cliaddr;
        socklen_t clilen;
        void *read_file();
        void *update_file();

        // making a server socket
        if ((fd = make_server(atoi(argv[1]))) == -1)
            oops("Unable to make server", 1)

        // detaching threads
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

        /// opening thread for reading
        pthread_create(&t2, &attr, read_file, NULL);

        while (1) {
            clilen = sizeof(cliaddr);
            // accepting request
            sock = accept(fd, (struct sockaddr *)&cliaddr, &clilen);
            // error comparison against failure of request and INT
            if (sock == -1 && errno != EINTR)
                oops("accept", 2)
            else if (sock == -1 && errno == EINTR)
                oops("Pressed INT", 3)
            newfd = (int *)malloc(sizeof(int));
            *newfd = sock;
            // creating thread per request
            pthread_create(&t1, &attr, update_file, (void *)newfd);
        }
        free(newfd);
        return 0;
    }

    void *read_file(void *m)
    {
        pthread_mutex_lock(&lock);
        while (1) {
            printf("Waiting for lock.\n");
            pthread_cond_wait(&writer, &lock);
            printf("I am reading here.\n");
            printf("%s", mailbox);
            mailbox = NULL;
            pthread_cond_signal(&writer);
        }
    }

    void *update_file(int *m)
    {
        int sock = *m;
        int fs;
        int nread;
        char buffer[BUFSIZ];
        if ((fs = open("database.txt", O_RDWR)) == -1)
            oops("Unable to open file", 4)
        while (1) {
            pthread_mutex_lock(&lock);
            write(1, "Waiting to get writer lock.\n", 29);
            if (mailbox != NULL)
                pthread_cond_wait(&writer, &lock);
            lseek(fs, 0, SEEK_END);
            printf("Reading from socket.\n");
            nread = read(sock, buffer, BUFSIZ);
            printf("Writing in file.\n");
            write(fs, buffer, nread);
            mailbox = buffer;
            pthread_cond_signal(&writer);
            pthread_mutex_unlock(&lock);
        }
        close(fs);
    }


  • Keeping socket open to send files on timer calls?

    - by user3704768
    I'm writing a program that requires an image to be fetched from a remote server every 10 milliseconds or so, as that's how often the image is updated. My current method calls a timer to grab the image, but it encounters Socket Closed errors all the time, and sometimes does not work at all. How can I fix my methods to keep the socket open the whole time, so no reconnecting is needed? Here is the full class:

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.net.UnknownHostException;
    import javax.swing.Timer;

    public class Connection {

        public static void createServer() throws IOException {
            Capture.getScreen();
            ServerSocket socket = null;
            try {
                socket = new ServerSocket(12345, 0, InetAddress.getByName("127.0.0.1"));
            } catch (UnknownHostException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
            System.out.println("Server started on " + socket.getInetAddress().getHostAddress()
                    + ":" + socket.getLocalPort() + ",\nWaiting for client to connect.");
            final Socket clientConnection = socket.accept();
            System.out.println("Client accepted from "
                    + clientConnection.getInetAddress().getHostAddress() + ", sending file");
            ActionListener taskPerformer = new ActionListener() {
                public void actionPerformed(ActionEvent evt) {
                    System.out.println("Sending File");
                    try {
                        pipeStreams(new FileInputStream(new File("captures/sCap.png")),
                                clientConnection.getOutputStream(), 1024);
                    } catch (FileNotFoundException e) {
                        e.printStackTrace();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            };
            System.out.println("closing out connection");
            try {
                clientConnection.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            try {
                socket.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            Timer timer = new Timer(10, taskPerformer);
            timer.setRepeats(true);
            timer.start();
        }

        public static void createClient() throws IOException {
            System.out.println("Connecting to server.");
            final Socket socket = new Socket();
            try {
                socket.connect(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 12345));
            } catch (UnknownHostException e) {
                e.printStackTrace();
            } catch (IOException e) {
            }
            ActionListener taskPerformer = new ActionListener() {
                public void actionPerformed(ActionEvent evt) {
                    System.out.println("Success, retreiving file.");
                    try {
                        pipeStreams(socket.getInputStream(),
                                new FileOutputStream(new File("captures/rCap.png")), 1024);
                    } catch (FileNotFoundException e) {
                        e.printStackTrace();
                    } catch (IOException e) {
                    }
                }
            };
            System.out.println("Closing connection");
            try {
                socket.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            Timer timer = new Timer(10, taskPerformer);
            timer.setRepeats(true);
            timer.start();
        }

        public static void pipeStreams(java.io.InputStream source,
                java.io.OutputStream destination, int bufferSize) throws IOException {
            byte[] buffer = new byte[bufferSize];
            int read = 0;
            while ((read = source.read(buffer)) != -1) {
                destination.write(buffer, 0, read);
            }
            destination.flush();
            destination.close();
            source.close();
        }
    }


  • Has JavaScript developed beyond what it was originally designed to do?

    - by Elliot Bonneville
    I've been talking with a friend about the purpose of JavaScript, when and how it should be used, etc. He quoted this:

    JavaScript was designed to add interactivity to HTML pages [...]
    JavaScript gives HTML designers a programming tool - HTML authors are normally not programmers, but JavaScript is a scripting language with a very simple syntax! Almost anyone can put small "snippets" of code into their HTML pages.
    JavaScript can react to events - A JavaScript can be set to execute when something happens, like when a page has finished loading or when a user clicks on an HTML element.
    JavaScript can read and write HTML elements - A JavaScript can read and change the content of an HTML element.
    JavaScript can be used to validate data - A JavaScript can be used to validate form data before it is submitted to a server. This saves the server from extra processing.
    JavaScript can be used to detect the visitor's browser - A JavaScript can be used to detect the visitor's browser, and - depending on the browser - load another page specifically designed for that browser.
    JavaScript can be used to create cookies - A JavaScript can be used to store and retrieve information on the visitor's computer.

    However, it seems like JavaScript is getting used to do a lot more than that these days. My friend also advocates against using JavaScript's OOP functionality, claiming that "you shouldn't be processing data, merely validating." Is JavaScript really limited to validating data and making flashy graphics on a web page? He goes on to claim "you shouldn't be attempting to access databases through javascript" and also says "in general you don't want to be doing your heavy lifting in javascript". I can't say I agree with his opinion, but I'd like to get some more input on this. So, my question: Has JavaScript evolved from the definition above into something more powerful, has the way we use it changed, or am I just plain wrong? While I realize this is a subjective question, I can't find any more information on it, so a few links would be good, if nothing else. I'm not looking for a debate, just an answer.


  • DirectX11 CreateWICTextureFromMemory Using PNG

    - by seethru
    I've currently got textures loading using CreateWICTextureFromFile; however, I'd like a little more control over it, and I'd like to store images in their byte form in a resource loader. Below are two sets of test code that return two separate results, and I'm looking for any insight into a possible solution.

    ID3D11ShaderResourceView* srv;
    std::basic_ifstream<unsigned char> file("image.png", std::ios::binary);
    file.seekg(0, std::ios::end);
    int length = file.tellg();
    file.seekg(0, std::ios::beg);
    unsigned char* buffer = new unsigned char[length];
    file.read(&buffer[0], length);
    file.close();
    HRESULT hr;
    hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(),
                                             &buffer[0], sizeof(buffer), nullptr, &srv, NULL);

    The above code returns "Component not found."

    std::ifstream file;
    ID3D11ShaderResourceView* srv;
    file.open("../Assets/Textures/osg.png", std::ios::binary);
    file.seekg(0, std::ios::end);
    int length = file.tellg();
    file.seekg(0, std::ios::beg);
    std::vector<char> buffer(length);
    file.read(&buffer[0], length);
    file.close();
    HRESULT hr;
    hr = DirectX::CreateWICTextureFromMemory(_D3D->GetDevice(), _D3D->GetDeviceContext(),
                                             (const uint8_t*)&buffer[0], sizeof(buffer), nullptr, &srv, NULL);

    The above code returns that the image format is unknown. I'm clearly doing something wrong here; any help is greatly appreciated. I tried finding anything even similar on Stack Overflow and Google, to no avail.


  • Insufficient Permissions Problems with MSDeploy and TFS Build 2010

    - by jdanforth
    I ran into these problems on a TFS 2010 RC setup where I wanted to deploy a web site as part of the nightly build:

    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets (3481): Web deployment task failed. (An error occurred when reading the IIS Configuration File 'MACHINE/REDIRECTION'. The identity performing the operation was 'NT AUTHORITY\NETWORK SERVICE'.)
    An error occurred when reading the IIS Configuration File 'MACHINE/REDIRECTION'. The identity performing the operation was 'NT AUTHORITY\NETWORK SERVICE'.
    Filename: \\?\C:\Windows\system32\inetsrv\config\redirection.config
    Error: Cannot read configuration file due to insufficient permissions

    As you can see, I'm running the build service as NETWORK SERVICE, which is quite usual. The first thing I did then was to give NETWORK SERVICE read access to the whole directory where redirection.config is sitting, C:\Windows\system32\inetsrv\config. That gave me a new error:

    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets (3481): Web deployment task failed. (Attempted to perform an unauthorized operation.)

    The reason for this problem was that NETWORK SERVICE didn't have write permission to the place where I'd told MSDeploy to put the web site physically on the disk. Once I'd given NETWORK SERVICE the right permissions, MSDeploy completed as expected!

    NOTE! I've not had this problem with TFS 2010 RTM, so it might be just an RC issue!


  • Export data to Excel from Silverlight/WPF DataGrid

    - by outcoldman
    Data export from a DataGrid to Excel is a very common task. It can be solved in different ways, and the chosen way depends on the kind of app you are designing. If you are developing an app for an enterprise, to be installed on several computers, then you can set system requirements that clients must meet for your app to work - or the customer will set the system requirements on which your app should run. In this case you can use COM for the export (using the infrastructure of Excel or OpenOffice). This approach gives you much more flexibility and the possibility to use all the features of the Excel app; I'll discuss it below. The other way: your app is for personal use and can be installed on any home computer. In this case it is not good to ask the user to install MS Office or OpenOffice just to use your app. Here you can use third-party tools for the export, or export to an XML/HTML format which MS Office can read (the approach used by JIRA). But in this case it will be more difficult to satisfy user tasks, like creating a document with landscape orientation and defined print fields. In this article I'll show you how to work with the Excel object from .NET 4 and Silverlight 4 with dynamic objects, and give you an approach which allows you to export data from the Silverlight and WPF DataGrid controls.
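    As an illustration of the COM approach described above, here is a minimal hedged C# sketch (the layout and cell values are mine, not from the article; it assumes Excel is installed on the client machine) using .NET 4 dynamic binding to drive Excel without a compile-time interop assembly:

        using System;

        class ExcelExportSketch
        {
            static void Main()
            {
                // Late-bound COM: resolve Excel at run time via its ProgID.
                Type excelType = Type.GetTypeFromProgID("Excel.Application");
                dynamic excel = Activator.CreateInstance(excelType);
                excel.Visible = true;

                dynamic workbook = excel.Workbooks.Add();
                dynamic sheet = workbook.ActiveSheet;

                // Write a header row and one data row, as an export loop would.
                sheet.Cells[1, 1] = "Product";
                sheet.Cells[1, 2] = "Price";
                sheet.Cells[2, 1] = "Widget";
                sheet.Cells[2, 2] = 9.99;
            }
        }

    A real exporter would loop over the DataGrid's items and columns instead of the hard-coded cells, but the dynamic calls stay the same.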


  • SQLAuthority News – A Successful Performance Tuning Seminar at Pune – Dec 4-5, 2010

    - by pinaldave
    This is a report on my third very successful seminar event on SQL Server Performance Tuning. The SQL Server Performance Tuning Seminar in Colombo was oversubscribed with a total of 35 attendees; you can read the details over here: SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4-5, 2010. The SQL Server Performance Tuning Seminar in Hyderabad was oversubscribed with a total of 25 attendees; you can read the details over here: SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010. The same seminar was offered in Pune on December 4-5, 2010. We had another successful seminar with lots of performance talk, attended by 30 people. The best part of the seminar was that along with our agenda, we talked about the following very interesting concepts:
    Deadlocks Detection and Removal
    Dynamic SQL and Inline Code SQL Optimizations
    Multiple OR conditions and performance tuning
    Dynamic Search Condition Building and Improvement
    Memory Cache and Improvement
    Bottleneck Detections – Memory, CPU and IO
    Beginning Performance Tuning on Production
    Parametrization
    Improving already Super Fast Queries
    Convenience vs. Performance
    Proper way to create Indexes
    Hints and Disadvantages
    I had a great time doing the seminar and sharing my performance tricks with all. The highlight of this seminar was that I explained to the attendees how I begin performance tuning when I go on performance tuning consultations.
    [Photos: Pinal Dave at the SQL Server Performance Tuning Seminar]
    This seminar series is 100% demo-oriented, with no usual PowerPoint talk. It is built from my performance tuning experiences with various organizations. I am not planning any more seminars this year; it went great, but I am currently booked for the next 60 days at various performance tuning engagements. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology


  • SQLAuthority News – Pinal Dave: Blogger, MVP and now Interviewee by Michael J Swart

    - by pinaldave
    Michael J. Swart is a very unique person. I have often exchanged emails with him and have also used a couple of his scripts in my presentations (with his permission). Every time I conduct a spatial database presentation, I always start with his script where he has drawn the wonderful image of Botticelli's Birth of Venus. I often think he is more of a creative artist than an IT professional. However, if you read his blog posts and articles, they are top notch, and each article is as creative as his caricatures. He is wonderful, inspiring, creative and, most importantly, very humble. He recently interviewed me and asked me some very interesting questions. To answer them, I had to share some interesting aspects of my life which I had never shared in any interview before. Here are a few of the questions I answered on his blog:
    How I met my wife?
    Best moments of my life?
    How to pronounce my last name?
    Who inspired me?
    English as a Third Language.
    I am also thankful to Michael for drawing my caricature. I really liked it and I am very glad that he took the time to do so. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, Readers Question, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Google Reader Play – Reading redefined

    - by samsudeen
    “Google Reader Play” is a new web browsing feature launched by Google on Wednesday which lets users browse and explore content in Google Reader like a TV, rather than as the hierarchical tree view. Google Reader Play finds and displays the coolest things on the net using the same “Recommended Items” feature found in Google Reader. If you are a Google user, it tries to filter the content based upon “items that several of your friends have shared” and your past Reader history. “Google Reader Play” automates the personalization of content by allowing users to mark, like and share items. It also allows you to personalize the content by choosing from the list of available categories. The interface looks simple, and users can now feel that reading news is like watching TV. This is what Google is saying about it: In Google Reader Play, items are presented one at a time, and each item is big and full-screen. After you’ve read an item, just click the next arrow to move to the next one, or click any item on the filmstrip below to fast-forward.


  • SQLAuthority News – SQL Server Cheat Sheet from MidnightDBA

    - by pinaldave
    When I read the article from MidnightDBA (I should say MidnightDBAs, because it is about Jen and Sean) regarding T-SQL for the Absentminded DBA, my natural reaction was that it is a perfect extension. A year ago, around the same month, I had created the SQL Server Cheatsheet. I have distributed a lot of copies of it since I produced it. In fact, while attending TechMela in Nepal today, I am getting many requests for copies of the SQL Server Cheatsheet. When I checked my RSS feed, I realized that Jen and Sean have a perfect cheat sheet for intermediate-level developers. I would like to suggest that all of you read their post and download the Absentminded DBA's Cheat Sheet for Intermediate TSQL. It is available in two formats: PDF and Docx. I just love how the members of the community help each other grow. I am fortunate to have received excellent feedback, corrections, and criticism on my blog posts so many times. Criticism and corrections, after all, are absolutely needed and make the community better as a whole. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, SQL, SQL Authority, SQL Download, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SQL Cheat Sheet


  • Using a WPF ListView as a DataGrid

    - by psheriff
    Many people like to view data in a grid format of rows and columns. WPF did not come with a data grid control that automatically creates rows and columns for you based on the object you pass it. However, the WPF Toolkit, which does contain a DataGrid control, can be downloaded from CodePlex.com. This DataGrid gives you the ability to pass it a DataTable or a Collection class, and it will automatically figure out the columns or properties, create all the columns for you, and display the data.

    The DataGrid control also supports editing and many other features that you might not always need. This means that the DataGrid does take a little more time to render the data. If you want to just display data (see Figure 1) in a grid format, then a ListView works quite well for this task. Of course, you will need to create the columns for the ListView, but with just a little generic code, you can create the columns on the fly just like the WPF Toolkit's DataGrid.

    Figure 1: A List of Data using a ListView

    A Simple ListView Control

    The XAML below is what you would use to create the ListView shown in Figure 1. However, the problem with using XAML is you have to pre-define the columns. You cannot re-use this ListView except for "Product" data.

    <ListView x:Name="lstData" ItemsSource="{Binding}">
      <ListView.View>
        <GridView>
          <GridViewColumn Header="Product ID" Width="Auto"
                          DisplayMemberBinding="{Binding Path=ProductId}" />
          <GridViewColumn Header="Product Name" Width="Auto"
                          DisplayMemberBinding="{Binding Path=ProductName}" />
          <GridViewColumn Header="Price" Width="Auto"
                          DisplayMemberBinding="{Binding Path=Price}" />
        </GridView>
      </ListView.View>
    </ListView>

    So, instead of creating the GridViewColumns in XAML, let's learn to create them in code so we can create any number of columns in a ListView.

    Create GridViewColumns From a DataTable

    To display multiple columns in a ListView control you need to set its View property to a GridView collection object. You add GridViewColumn objects to the GridView collection and assign the GridView to the View property. Each GridViewColumn object needs to be bound to a column or property name of the object that the ListView will be bound to. An ADO.NET DataTable object contains a collection of columns, and these columns have a ColumnName property which you use to bind to the GridViewColumn objects. Listing 1 shows a sample of reading an XML file into a DataSet object. After reading the data, a GridView object is created. You can then loop through the DataTable columns collection and create a GridViewColumn object for each column in the DataTable. Notice the DisplayMemberBinding property is set to a new Binding to the ColumnName in the DataTable.

    C#

    private void FirstSample()
    {
      // Read the data
      DataSet ds = new DataSet();
      ds.ReadXml(GetCurrentDirectory() + @"\Xml\Product.xml");

      // Create the GridView
      GridView gv = new GridView();

      // Create the GridView Columns
      foreach (DataColumn item in ds.Tables[0].Columns)
      {
        GridViewColumn gvc = new GridViewColumn();
        gvc.DisplayMemberBinding = new Binding(item.ColumnName);
        gvc.Header = item.ColumnName;
        gvc.Width = Double.NaN;
        gv.Columns.Add(gvc);
      }

      // Setup the GridView Columns
      lstData.View = gv;
      // Display the Data
      lstData.DataContext = ds.Tables[0];
    }

    VB.NET

    Private Sub FirstSample()
      ' Read the data
      Dim ds As New DataSet()
      ds.ReadXml(GetCurrentDirectory() & "\Xml\Product.xml")

      ' Create the GridView
      Dim gv As New GridView()

      ' Create the GridView Columns
      For Each item As DataColumn In ds.Tables(0).Columns
        Dim gvc As New GridViewColumn()
        gvc.DisplayMemberBinding = New Binding(item.ColumnName)
        gvc.Header = item.ColumnName
        gvc.Width = [Double].NaN
        gv.Columns.Add(gvc)
      Next

      ' Setup the GridView Columns
      lstData.View = gv
      ' Display the Data
      lstData.DataContext = ds.Tables(0)
    End Sub

    Listing 1: Loop through the DataTable columns collection to create GridViewColumn objects

    A Generic Method for Creating a GridView

    Instead of having to write the code shown in Listing 1 for each ListView you wish to create, you can create a generic method that, given any DataTable, will return a GridView column collection. Listing 2 shows how you can simplify the code in Listing 1 by setting up a class called WPFListViewCommon and creating a method called CreateGridViewColumns that returns your GridView.

    C#

    private void DataTableSample()
    {
      // Read the data
      DataSet ds = new DataSet();
      ds.ReadXml(GetCurrentDirectory() + @"\Xml\Product.xml");

      // Setup the GridView Columns
      lstData.View = WPFListViewCommon.CreateGridViewColumns(ds.Tables[0]);
      lstData.DataContext = ds.Tables[0];
    }

    VB.NET

    Private Sub DataTableSample()
      ' Read the data
      Dim ds As New DataSet()
      ds.ReadXml(GetCurrentDirectory() & "\Xml\Product.xml")

      ' Setup the GridView Columns
      lstData.View = WPFListViewCommon.CreateGridViewColumns(ds.Tables(0))
      lstData.DataContext = ds.Tables(0)
    End Sub

    Listing 2: Call a generic method to create GridViewColumns.

    The CreateGridViewColumns Method

    The CreateGridViewColumns method will take a DataTable as a parameter and create a GridView object with a GridViewColumn object in its collection for each column in your DataTable.

    C#

    public static GridView CreateGridViewColumns(DataTable dt)
    {
      // Create the GridView
      GridView gv = new GridView();
      gv.AllowsColumnReorder = true;

      // Create the GridView Columns
      foreach (DataColumn item in dt.Columns)
      {
        GridViewColumn gvc = new GridViewColumn();
        gvc.DisplayMemberBinding = new Binding(item.ColumnName);
        gvc.Header = item.ColumnName;
        gvc.Width = Double.NaN;
        gv.Columns.Add(gvc);
      }

      return gv;
    }

    VB.NET

    Public Shared Function CreateGridViewColumns(ByVal dt As DataTable) As GridView
      ' Create the GridView
      Dim gv As New GridView()
      gv.AllowsColumnReorder = True

      ' Create the GridView Columns
      For Each item As DataColumn In dt.Columns
        Dim gvc As New GridViewColumn()
        gvc.DisplayMemberBinding = New Binding(item.ColumnName)
        gvc.Header = item.ColumnName
        gvc.Width = [Double].NaN
        gv.Columns.Add(gvc)
      Next

      Return gv
    End Function

    Listing 3: The CreateGridViewColumns method takes a DataTable and creates GridViewColumn objects in a GridView.

    By separating this method out into a class, you can call this method any time you want to create a ListView with a collection of columns from a DataTable.

    Summary

    In this blog you learned how to create a ListView that acts like a DataGrid. You are able to use a DataTable as both the source of the data and for creating the columns for the ListView. In the next blog entry you will learn how to use the same technique, but for Collection classes (a sketch of one possible approach appears below).

    NOTE: You can download the complete sample code (in both VB and C#) at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "WPF ListView as a DataGrid" from the drop-down.

    Good Luck with your Coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".
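    As a follow-up to the Summary's teaser, here is a hedged C# sketch (my illustration, not the article's code; the overload is an assumption) of how the same technique might extend to Collection classes: reflect over the item type's public properties instead of DataTable columns.

        // Requires: using System.Windows.Controls; using System.Windows.Data;
        // Hypothetical companion overload for WPFListViewCommon
        public static GridView CreateGridViewColumns(Type itemType)
        {
          GridView gv = new GridView();
          gv.AllowsColumnReorder = true;

          // One column per public property, bound by property name
          foreach (System.Reflection.PropertyInfo prop in itemType.GetProperties())
          {
            GridViewColumn gvc = new GridViewColumn();
            gvc.DisplayMemberBinding = new Binding(prop.Name);
            gvc.Header = prop.Name;
            gvc.Width = Double.NaN;
            gv.Columns.Add(gvc);
          }

          return gv;
        }

    Usage would look like lstData.View = WPFListViewCommon.CreateGridViewColumns(typeof(Product)); with lstData.DataContext set to the collection itself.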


  • SQLAuthority News – Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning Training

    - by pinaldave
    Only 3 days are left to register for the courses. This is a one-time offer with a big discount; the deadline for course registration is 5th May, 2010. There are two different courses offered by Solid Quality Mentors:
    1) Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning – Pinal Dave
    Date: May 12-14, 2010
    Price: Rs. 14,000/person for 3 days
    Discount Code: ‘SQLAuthority.com’
    Effective Price: Rs. 11,000/person for 3 days
    2) SharePoint 2010 – Joy Rathnayake
    Date: May 10-11, 2010
    Price: Rs. 11,000/person for 2 days
    Discount Code: ‘SQLAuthority.com’
    Effective Price: Rs. 8,000/person for 2 days
    Download the complete PDF brochure. To register, either send an email to [email protected] or call +91 95940 43399. Feel free to drop me an email at pinal "at" SQLAuthority.com for any additional information and clarification. Training Venue: Abridge Solutions, #90/B/C/3/1, Ganesh GHR & MSY Plaza, Vittalrao Nagar, Near Image Hospital, Madhapur, Hyderabad – 500 081. Additionally, there is a special program, SolidQ India Insider, which is only available to the first few registrants of the courses. Read more details about the course here. Read my TechEd India 2010 experience here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology


  • Parsing HTML Documents with the Html Agility Pack

    Screen scraping is the process of programmatically accessing and processing information from an external website. For example, a price comparison website might screen scrape a variety of online retailers to build a database of products and what various retailers are selling them for. Typically, screen scraping is performed by mimicking the behavior of a browser - namely, by making an HTTP request from code and then parsing and analyzing the returned HTML. The .NET Framework offers a variety of classes for accessing data from a remote website, namely the WebClient class and the HttpWebRequest class. These classes are useful for making an HTTP request to a remote website and pulling down the markup from a particular URL, but they offer no assistance in parsing the returned HTML. Instead, developers commonly rely on string parsing methods like String.IndexOf, String.Substring, and the like, or through the use of regular expressions. Another option for parsing HTML documents is to use the Html Agility Pack, a free, open-source library designed to simplify reading from and writing to HTML documents. The Html Agility Pack constructs a Document Object Model (DOM) view of the HTML document being parsed. With a few lines of code, developers can walk through the DOM, moving from a node to its children, or vice versa. Also, the Html Agility Pack can return specific nodes in the DOM through the use of XPath expressions. (The Html Agility Pack also includes a class for downloading an HTML document from a remote website; this means you can both download and parse an external web page using the Html Agility Pack.) This article shows how to get started using the Html Agility Pack and includes a number of real-world examples that illustrate this library's utility. A complete, working demo is available for download at the end of this article. Read on to learn more!
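    As a taste of the library, here is a short hedged C# sketch (the URL and XPath are illustrative only, not from the article): download a page with the Html Agility Pack's HtmlWeb class and list every hyperlink via an XPath query.

        using System;
        using HtmlAgilityPack;

        class HapSketch
        {
            static void Main()
            {
                var web = new HtmlWeb();
                HtmlDocument doc = web.Load("http://example.com/");

                // SelectNodes returns null when nothing matches, so guard it.
                var links = doc.DocumentNode.SelectNodes("//a[@href]");
                if (links != null)
                {
                    foreach (HtmlNode link in links)
                    {
                        Console.WriteLine("{0} -> {1}",
                            link.InnerText.Trim(),
                            link.GetAttributeValue("href", ""));
                    }
                }
            }
        }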


  • Oracle Announces Oracle Big Data Appliance X3-2 and Enhanced Oracle Big Data Connectors

    - by jgelhaus
    Enables Customers to Easily Harness the Business Value of Big Data at Lower Cost
    Engineered System Simplifies Big Data for the Enterprise
    Oracle Big Data Appliance X3-2 hardware features the latest 8-core Intel® Xeon E5-2600 series of processors, and compared with the previous generation, the 18 compute and storage servers with 648 TB raw storage now offer:
    33 percent more processing power with 288 CPU cores;
    33 percent more memory per node with 1.1 TB of main memory; and
    up to a 30 percent reduction in power and cooling.
    Oracle Big Data Appliance X3-2 further simplifies implementation and management of big data by integrating all the hardware and software required to acquire, organize and analyze big data. It includes:
    Support for CDH4.1, including software upgrades developed collaboratively with Cloudera to simplify NameNode High Availability in Hadoop, eliminating the single point of failure in a Hadoop cluster;
    Oracle NoSQL Database Community Edition 2.0, the latest version, which brings better Hadoop integration, elastic scaling and new APIs, including JSON and C support;
    The Oracle Enterprise Manager plug-in for Big Data Appliance, which complements Cloudera Manager to enable users to more easily manage a Hadoop cluster;
    Updated distributions of Oracle Linux and the Oracle Java Development Kit;
    An updated distribution of open source R, optimized to work with high performance multi-threaded math libraries.
    Data sheet: Oracle Big Data Appliance X3-2
    Oracle Big Data Appliance: Datacenter Network Integration
    Big Data and Natural Language: Extracting Insight From Text
    Thomson Reuters Discusses Oracle's Big Data Platform
    Connectors Integrate Hadoop with Oracle Big Data Ecosystem
    Oracle Big Data Connectors is a suite of software built by Oracle to integrate Apache Hadoop with Oracle Database, Oracle Data Integrator, and Oracle R Distribution. Enhancements to Oracle Big Data Connectors extend these data integration capabilities. With updates to every connector, this release includes:
    Oracle SQL Connector for Hadoop Distributed File System, for high performance SQL queries on Hadoop data from Oracle Database, enhanced with increased automation and querying of Hive tables, and now supported within the Oracle Data Integrator Application Adapter for Hadoop;
    Transparent access to the Hive Query Language from R and the introduction of new analytic techniques executing natively in Hadoop, enabling R developers to be more productive by increasing access to Hadoop in the R environment.
    Data sheet: Oracle Big Data Connectors
    High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database


  • SQL SERVER – Solution – Challenge – Puzzle – Usage of FAST Hint

    - by pinaldave
    Earlier I posted a quick puzzle and received a wonderful response to it from Brad Schulz. Today we will go over the solution. The puzzle was posted here: SQL SERVER – Challenge – Puzzle – Usage of FAST Hint. The question was: under what conditions is the FAST hint useful? In response to this puzzle, SQL Server expert Brad Schulz pointed me to his blog post where he explains how the FAST hint can be useful. I strongly recommend reading his blog post over here. With Brad's permission, I am reproducing the following queries here. He has come up with an example where the FAST hint improves performance.

    USE AdventureWorks
    GO
    DECLARE @DesiredDateAtMidnight DATETIME = '20010709'
    DECLARE @NextDateAtMidnight DATETIME = DATEADD(DAY, 1, @DesiredDateAtMidnight)

    -- Query without FAST
    SELECT OrderID = h.SalesOrderID,
           h.OrderDate,
           h.TerritoryID,
           TerritoryName = t.Name,
           c.CardType,
           c.CardNumber,
           CardExpire = RIGHT(STR(100 + ExpMonth), 2) + '/' + STR(ExpYear, 4),
           h.TotalDue
    FROM Sales.SalesOrderHeader h
    LEFT JOIN Sales.SalesTerritory t ON h.TerritoryID = t.TerritoryID
    LEFT JOIN Sales.CreditCard c ON h.CreditCardID = c.CreditCardID
    WHERE OrderDate >= @DesiredDateAtMidnight
      AND OrderDate < @NextDateAtMidnight
    ORDER BY h.SalesOrderID;

    -- Query with FAST(10)
    SELECT OrderID = h.SalesOrderID,
           h.OrderDate,
           h.TerritoryID,
           TerritoryName = t.Name,
           c.CardType,
           c.CardNumber,
           CardExpire = RIGHT(STR(100 + ExpMonth), 2) + '/' + STR(ExpYear, 4),
           h.TotalDue
    FROM Sales.SalesOrderHeader h
    LEFT JOIN Sales.SalesTerritory t ON h.TerritoryID = t.TerritoryID
    LEFT JOIN Sales.CreditCard c ON h.CreditCardID = c.CreditCardID
    WHERE OrderDate >= @DesiredDateAtMidnight
      AND OrderDate < @NextDateAtMidnight
    ORDER BY h.SalesOrderID
    OPTION (FAST 10)

    Now when you check the execution plans, you will find a visible difference: the query with FAST shows a much lower cost. Thank you, Brad, for an excellent post and for teaching us something. I ask all of you to read Brad's original blog post for much more information. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • A Look at the GridView's New Sorting Styles in ASP.NET 4.0

    Like every Web control in the ASP.NET toolbox, the GridView includes a variety of style-related properties, including CssClass, Font, ForeColor, BackColor, Width, Height, and so on. The GridView also includes style properties that apply to certain classes of rows in the grid, such as RowStyle, AlternatingRowStyle, HeaderStyle, and PagerStyle. Each of these meta-style properties offers the standard style properties (CssClass, Font, etc.) as subproperties. In ASP.NET 4.0, Microsoft added four new style properties to the GridView control: SortedAscendingHeaderStyle, SortedAscendingCellStyle, SortedDescendingHeaderStyle, and SortedDescendingCellStyle. These four properties are meta-style properties like RowStyle and HeaderStyle, but apply to a column of cells rather than a row. These properties only apply when the GridView is sorted - if the grid's data is sorted in ascending order then the SortedAscendingHeaderStyle and SortedAscendingCellStyle properties define the styles for the column the data is sorted by. The SortedDescendingHeaderStyle and SortedDescendingCellStyle properties apply to the sorted column when the results are sorted in descending order. These four new properties make it easier to customize the appearance of the column by which the data is sorted. Using these properties along with a touch of Cascading Style Sheets (CSS), it is possible to add up and down arrows to the sorted column's header to indicate whether the data is sorted in ascending or descending order. Likewise, these properties can be used to shade the sorted column or make its text bold. This article shows how to use these four new properties to style the sorted column. Read on to learn more!
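    For orientation, here is a hedged C# code-behind sketch (the grid ID and CSS class names are made up for illustration; the same assignments can also be made declaratively in markup) showing the four new properties in use:

        using System;
        using System.Web.UI;

        public partial class ProductsPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Styles applied to the sorted column when sorting ascending...
                GridView1.SortedAscendingHeaderStyle.CssClass = "sortAscHeader";
                GridView1.SortedAscendingCellStyle.CssClass = "sortAscCell";

                // ...and when sorting descending.
                GridView1.SortedDescendingHeaderStyle.CssClass = "sortDescHeader";
                GridView1.SortedDescendingCellStyle.CssClass = "sortDescCell";
            }
        }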


  • Accessing and Updating Data in ASP.NET: Filtering Data Using a CheckBoxList

    Filtering Database Data with Parameters, an earlier installment in this article series, showed how to filter the data returned by ASP.NET's data source controls. In a nutshell, the data source controls can include parameterized queries whose parameter values are defined via parameter controls. For example, the SqlDataSource can include a parameterized SelectCommand, such as: SELECT * FROM Books WHERE Price > @Price. Here, @Price is a parameter; the value for a parameter can be defined declaratively using a parameter control. ASP.NET offers a variety of parameter controls, including ones that use hard-coded values, ones that retrieve values from the querystring, and ones that retrieve values from session, and others. Perhaps the most useful parameter control is the ControlParameter, which retrieves its value from a Web control on the page. Using the ControlParameter we can filter the data returned by the data source control based on the end user's input. While the ControlParameter works well with most types of Web controls, it does not work as expected with the CheckBoxList control. The ControlParameter is designed to retrieve a single property value from the specified Web control, but the CheckBoxList control does not have a property that returns all of the values of its selected items in a form that the CheckBoxList control can use. Moreover, if you are using the selected CheckBoxList items to query a database you'll quickly find that SQL does not offer out of the box functionality for filtering results based on a user-supplied list of filter criteria. The good news is that with a little bit of effort it is possible to filter data based on the end user's selections in a CheckBoxList control. This article starts with a look at how to get SQL to filter data based on a user-supplied, comma-delimited list of values. Next, it shows how to programmatically construct a comma-delimited list that represents the selected CheckBoxList values and pass that list into the SQL query. Finally, we'll explore creating a custom parameter control to handle this logic declaratively. Read on to learn more!
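    Here is a hedged C# sketch of the comma-delimited list step described above (the helper and its name are illustrative, not from the article): collect the selected values and join them into the single string that gets passed as the parameter value.

        using System.Collections.Generic;
        using System.Web.UI.WebControls;

        public static class CheckBoxListHelper
        {
            public static string GetSelectedValuesCsv(CheckBoxList list)
            {
                var selected = new List<string>();
                foreach (ListItem item in list.Items)
                {
                    if (item.Selected)
                    {
                        selected.Add(item.Value);
                    }
                }
                // e.g. "3,7,12" - ready to hand to the parameterized query.
                return string.Join(",", selected.ToArray());
            }
        }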


  • JD Edwards in the Cloud…Really Already!

    - by user709270
    Yes, there is a lot of conversation about Oracle and the cloud. Many of you may assume that Oracle applications in the cloud only apply to Oracle Fusion Applications, and JD Edwards customers are curious about whether, when and how JD Edwards might be offered to them on a subscription basis. The truth of the matter is that Oracle partners are providing a JD Edwards subscription offering today. To help you understand what's available, please read on for the reader's digest version! Let's start with a definition. JD Edwards EnterpriseOne is available as an Accelerate subscription. An Oracle "Accelerate" subscription is Oracle's approach for providing simple-to-deploy, packaged, enterprise-class software solutions to growing midsize organizations through its network of expert partners. The partners that offer Oracle JD Edwards Accelerate subscriptions do so via their Partner Private Clouds (PPC). The Oracle JD Edwards cloud solutions are offered only by qualified Oracle JD Edwards partners, and they provide customers a complete Oracle solution that includes license software, maintenance, hosting and other services on a monthly subscription basis. Qualified partners must be members of Oracle PartnerNetwork, be an Oracle Accelerate solutions provider, and be enabled to deliver JD Edwards applications via Oracle Business Accelerator rapid implementation technology. Currently we have many JD Edwards partners around the globe that offer the JD Edwards Accelerate subscription model. To access a list of Oracle JD Edwards partners currently in this program, click here. To learn more about Oracle JD Edwards cloud computing, read this recently published white paper: Oracle JD Edwards Cloud Computing: Choosing a deployment strategy that fits.


  • Html Agility Pack for Reading “Real World” HTML

    - by WeigeltRo
    In an ideal world, all the data you need from the web would be available via well-designed services. In the real world you sometimes have to scrape the data off a web page. Ugly, dirty – but if you really want that data, you have no choice. Just don't write (yet another) HTML parser. I stumbled across the Html Agility Pack (HAP) a long time ago, but only now had the need for a robust way to read HTML. A quote from the website: This is an agile HTML parser that builds a read/write DOM and supports plain XPATH or XSLT (you actually don't HAVE to understand XPATH nor XSLT to use it, don't worry...). It is a .NET code library that allows you to parse "out of the web" HTML files. The parser is very tolerant with "real world" malformed HTML. The object model is very similar to what System.Xml proposes, but for HTML documents (or streams). Using the HAP was a simple matter of getting the NuGet package, taking a look at the example and dusting off some of my XPath knowledge from years ago. The documentation on the CodePlex site is non-existent, but if you've queried a DOM or used XPath or XSLT before, you shouldn't have problems finding your way around using IntelliSense (ReSharper tip: press Ctrl+Shift+F1 on class members to read the full doc comments).
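    A tiny hedged C# sketch of the tolerance the quote describes (the markup string is my own example, not from the post): LoadHtml accepts malformed HTML - here, unclosed <li> tags - and still builds a DOM you can query.

        using System;
        using HtmlAgilityPack;

        class TolerantParseSketch
        {
            static void Main()
            {
                var doc = new HtmlDocument();
                doc.LoadHtml("<ul><li>one<li>two</ul>"); // no closing </li> tags

                foreach (HtmlNode li in doc.DocumentNode.SelectNodes("//li"))
                {
                    Console.WriteLine(li.InnerText.Trim()); // "one", "two"
                }
            }
        }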


  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a 'global' application plan.
    Recently I was asked to write a blog post about the wait statistics in SQL Server, and since I had been thinking about writing it for quite some time now, here it is.
    It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always part of a bigger system – there are always other players in the game: whether it is a client application, a web service, any other kind of data import/export process, and so on. In short, the SQL Server surroundings look like this: [diagram: SQL Server and its surrounding clients, connected over the network]
    This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (tabular data stream).
    As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance.
    Let's dive into an example: let's say that we have a web server hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just an improper setup) and does not send/receive data faster than 10Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: "My data is coming very slow."
    Now, let's move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server – and the application is not using any stored procedure calls, but instead, for every user request the application is sending an 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let's say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) and will travel over the network.
    On the other side, our SQL Server network card will receive the packets and pass them to our network layer; the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan, and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our database engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled.
    Let's say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will be converted into packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up.
    Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.
    Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements separated by the 'GO' command, then there will be three different roundtrips.
    TDS packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS packets sent from the client is the number of packets sent from the client; in case the request is large, it may need more buffers, and eventually might even need more server roundtrips.
    TDS packets received from server – the TDS packets sent by the server to the client during the query execution.
    Bytes sent from client – the volume of the data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a procedure name plus parameters, and this will minimize the network pressure (see the C# sketch near the end of this post).
    Bytes received from server – the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the data types involved, this number will vary. But still, think about the network load when you request data from SQL Server.
    Client processing time – the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client.
    Wait time on server replies – the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time) Here is an illustration of the Client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual on queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However a long ‘client processing time’ does not necessarily imply the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully of the resource utilization in the client-server environment. From experience I can say that so far I have seen only cases when the application developers and the Database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it: Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow-developers: ‘why?’. Monitor your network counters in Perfmon: Network Interface:Output queue length, Redirector:Network errors/sec, TCPv4: Segments retransmitted/sec and so on. Make sure to establish a good friendship with your network administrator (buy them coffee, for example J ) and get into a conversation about the network settings. Have them explain to you how the network cards are setup – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on. Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings. 
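As a companion to the Perfmon guideline above, here is a small C# sketch (an illustrative assumption, not part of the original post) that reads one of the mentioned counters programmatically. It uses System.Diagnostics.PerformanceCounter, so it is Windows-only; the category and counter names are the standard English Perfmon names, and the NIC instance names will differ from machine to machine.

using System;
using System.Diagnostics;

class NetworkCounterSample
{
    static void Main()
    {
        // "Network Interface" is a standard Perfmon category; enumerate the
        // NIC instances instead of hard-coding a name.
        var category = new PerformanceCounterCategory("Network Interface");

        foreach (string nic in category.GetInstanceNames())
        {
            using (var queue = new PerformanceCounter("Network Interface", "Output Queue Length", nic))
            {
                // Output Queue Length is an instantaneous gauge, so a single
                // sample is meaningful; a sustained value above ~2 is the
                // classic sign of a saturated or misconfigured NIC.
                Console.WriteLine("{0}: Output Queue Length = {1}", nic, queue.NextValue());
            }
        }
    }
}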
Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment.

Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual for queries containing operators that need the entire sub-query to be evaluated before they proceed (for example, sort and top operators). A query with a very short ‘wait time on server replies’, on the other hand, means that the query was able to return the first row quickly. However, a long ‘client processing time’ does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result, and this is how long it took until the very last row was returned.

The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have only seen cases where the application developers and the database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronized way of utilizing resources between the client – server – client.

Here is another example: think of a setup similar to the one above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each, our users will get the page slowly and will still think that there is something wrong with our data.

Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read about the big picture here.

And finally, here are some guidelines for monitoring the network performance and improving it:

Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by the number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: ‘why?’.

Monitor your network counters in Perfmon: Network Interface: Output Queue Length, Redirector: Network Errors/sec, TCPv4: Segments Retransmitted/sec and so on (a small sketch of reading one of these counters programmatically is shown above, right after these guidelines’ source list).

Make sure to establish a good friendship with your network administrator (buy them coffee, for example :) ) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on.

Find some time to read a bit about networking.

In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings.

As further reading I would highly recommend the Wait Stats series on this blog, and I would also recommend that you have the coffee-break conversation with your network admin as soon as possible.

This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Method extension for safely type convert

    - by outcoldman
    Recently I read a good Russian post with many interesting extension methods, and it reminded me that I too have one good extension method: “safely type convert”. I got the idea for this method at my last job. We often write code like this:

    int intValue;
    if (obj == null || !int.TryParse(obj.ToString(), out intValue))
        intValue = 0;

    This is how to safely parse an object to an int. Of course, it would be good to have a unified method for safe casting. I found that a better way is to create extension methods and use them as follows:

    int i;
    i = "1".To<int>();    // i == 1
    i = "1a".To<int>();   // i == 0 (default value of int)
    i = "1a".To(10);      // i == 10 (10 set as the default value)
    i = "1".To(10);       // i == 1

    // ********** Nullable sample **************
    int? j;
    j = "1".To<int?>();     // j == 1
    j = "1a".To<int?>();    // j == null
    j = "1a".To<int?>(10);  // j == 10
    j = "1".To<int?>(10);   // j == 1

    Read more... (redirect to http://outcoldman.ru)
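    The full implementation is behind the link above, and the author’s actual code may differ, but a minimal sketch of how such a To<T>() extension could look – assuming Convert.ChangeType does the work and Nullable.GetUnderlyingType handles nullable targets – is:

    using System;

    public static class ConvertExtensions
    {
        // Converts source.ToString() to T, falling back to defaultValue when
        // the value is null or cannot be converted.
        public static T To<T>(this object source, T defaultValue = default(T))
        {
            if (source == null)
                return defaultValue;

            try
            {
                // Unwrap Nullable<T> so Convert.ChangeType sees the underlying type.
                Type target = Nullable.GetUnderlyingType(typeof(T)) ?? typeof(T);
                return (T)Convert.ChangeType(source.ToString(), target);
            }
            catch (Exception ex) when (ex is FormatException ||
                                       ex is InvalidCastException ||
                                       ex is OverflowException)
            {
                return defaultValue;
            }
        }
    }

    With this sketch, "1a".To<int>() swallows the FormatException and returns default(int) == 0, while "1a".To<int?>() returns null, matching the behaviour shown in the samples above.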

    Read the article

  • Apache2 on Raspbian: Multiviews is enabled but not working

    - by Christian L
    I recently moved webservers, from an Ubuntu server set up by my brother (on which I have sudo) to a Raspbian server I set up myself. On the other server MultiViews worked out of the box, but on Raspbian it does not seem to work, although it appears to be enabled out of the box there as well. What I am trying to do is get Apache to serve my.doma.in/mobile.php when I enter my.doma.in/mobile in the address field. I am using the same sites-available file as I did before; it looks like this:

    <VirtualHost *:80>
        ServerName my.doma.in
        ServerAdmin [email protected]
        DocumentRoot /home/christian/www/do

        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>

        <Directory /home/christian/www/do>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log

        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn

        CustomLog ${APACHE_LOG_DIR}/access.log combined

        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
    </VirtualHost>

    From what I have read in various places while googling this issue, the negotiation module has to be enabled, so I tried to enable it:

    sudo a2enmod negotiation

    which gave me this result:

    Module negotiation already enabled

    I have read through /etc/apache2/apache2.conf and did not find anything in particular that seemed helpful there, but please ask if you think I should post it. Any ideas on how to get MultiViews working?

    Read the article
