Search Results


  • jQuery, ASP.NET, and Browser History

    - by Stephen Walther
    One objection that people always raise against Ajax applications concerns browser history. Because an Ajax application updates its content by performing sneaky Ajax postbacks, the browser backwards and forwards buttons don’t work as you would normally expect. In a normal, non-Ajax application, when you click the browser back button, you return to a previous state of the application. For example, if you are paging through a set of movie records, you might return to the previous page of records. In an Ajax application, on the other hand, the browser backwards and forwards buttons do not work as you would expect. If you navigate to the second page in a list of records and click the backwards button, you won’t return to the previous page. Most likely, you will end up navigating away from the application entirely (which is very unexpected and irritating). Bookmarking presents a similar problem. You cannot bookmark a particular page of records in an Ajax application because the address bar does not reflect the state of the application. The Ajax Solution There is a solution to both of these problems. To solve both of these problems, you must take matters into your own hands and take responsibility for saving and restoring your application state yourself. Furthermore, you must ensure that the address bar gets updated to reflect the state of your application. In this blog entry, I demonstrate how you can take advantage of a jQuery library named bbq that enables you to control browser history (and make your Ajax application bookmarkable) in a cross-browser compatible way. The JavaScript Libraries In this blog entry, I take advantage of the following four JavaScript files: jQuery-1.4.2.js – The jQuery library. Available from the Microsoft Ajax CDN at http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js jquery.pager.js – Used to generate pager for navigating records. Available from http://plugins.jquery.com/project/Pager microtemplates.js – John Resig’s micro-templating library. Available from http://ejohn.org/blog/javascript-micro-templating/ jquery.ba-bbq.js – The Back Button and Query (BBQ) Library. Available from http://benalman.com/projects/jquery-bbq-plugin/ All of these libraries, with the exception of the Micro-templating library, are available under the MIT open-source license. The Ajax Application Let’s start by building a simple Ajax application that enables you to page through a set of movie database records, 3 records at a time. We’ll use my favorite database named MoviesDB. This database contains a Movies table that looks like this: We’ll create a data model for this database by taking advantage of the ADO.NET Entity Framework. The data model looks like this: Finally, we’ll expose the data to the universe with the help of a WCF Data Service named MovieService.svc. The code for the data service is contained in Listing 1. 
Listing 1 – MovieService.svc using System.Data.Services; using System.Data.Services.Common; namespace WebApplication1 { public class MovieService : DataService<MoviesDBEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Movies", EntitySetRights.AllRead); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } The WCF Data Service in Listing 1 exposes the movies so that you can query the movie database table with URLs that looks like this: http://localhost:2474/MovieService.svc/Movies -- Returns all movies http://localhost:2474/MovieService.svc/Movies?$top=5 – Returns 5 movies The HTML page in Listing 2 enables you to page through the set of movies retrieved from the WCF Data Service. Listing 2 – Original.html <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Movies with History</title> <link href="Design/Pager.css" rel="stylesheet" type="text/css" /> </head> <body> <h1>Page <span id="pageNumber"></span> of <span id="pageCount"></span></h1> <div id="pager"></div> <br style="clear:both" /><br /> <div id="moviesContainer"></div> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script> <script src="App_Scripts/jquery.pager.js" type="text/javascript"></script> <script type="text/javascript"> var pageSize = 3, pageIndex = 0; // Show initial page of movies showMovies(); function showMovies() { // Build OData query var query = "/MovieService.svc" // base URL + "/Movies" // top-level resource + "?$skip=" + pageIndex * pageSize // skip records + "&$top=" + pageSize // take records + " &$inlinecount=allpages"; // include total count of movies // Make call to WCF Data Service $.ajax({ dataType: "json", url: query, success: showMoviesComplete }); } function showMoviesComplete(result) { // unwrap results var movies = result["d"]["results"]; var movieCount = result["d"]["__count"] // Show movies using template var showMovie = tmpl("<li><%=Id%> - <%=Title %></li>"); var html = ""; for (var i = 0; i < movies.length; i++) { html += showMovie(movies[i]); } $("#moviesContainer").html(html); // show pager $("#pager").pager({ pagenumber: (pageIndex + 1), pagecount: Math.ceil(movieCount / pageSize), buttonClickCallback: selectPage }); // Update page number and page count $("#pageNumber").text(pageIndex + 1); $("#pageCount").text(movieCount); } function selectPage(pageNumber) { pageIndex = pageNumber - 1; showMovies(); } </script> </body> </html> The page in Listing 3 has the following three functions: showMovies() – Performs an Ajax call against the WCF Data Service to retrieve a page of movies. showMoviesComplete() – When the Ajax call completes successfully, this function displays the movies by using a template. This function also renders the pager user interface. selectPage() – When you select a particular page by clicking on a page number in the pager UI, this function updates the current page index and calls the showMovies() function. Figure 1 illustrates what the page looks like when it is opened in a browser. Figure 1 If you click the page numbers then the browser history is not updated. Clicking the browser forward and backwards buttons won’t move you back and forth in browser history. 
Furthermore, the address displayed in the address bar does not change when you navigate to different pages. You cannot bookmark any page except for the first page. Adding Browser History The Back Button and Query (bbq) library enables you to add support for browser history and bookmarking to a jQuery application. The bbq library supports two important methods: jQuery.bbq.pushState(object) – Adds state to browser history. jQuery.bbq.getState(key) – Gets state from browser history. The bbq library also supports one important event: hashchange – This event is raised when the part of an address after the hash # is changed. The page in Listing 3 demonstrates how to use the bbq library to add support for browser navigation and bookmarking to an Ajax page. Listing 3 – Default.html <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Movies with History</title> <link href="Design/Pager.css" rel="stylesheet" type="text/css" /> </head> <body> <h1>Page <span id="pageNumber"></span> of <span id="pageCount"></span></h1> <div id="pager"></div> <br style="clear:both" /><br /> <div id="moviesContainer"></div> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="App_Scripts/jquery.ba-bbq.js" type="text/javascript"></script> <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script> <script src="App_Scripts/jquery.pager.js" type="text/javascript"></script> <script type="text/javascript"> var pageSize = 3, pageIndex = 0; $(window).bind('hashchange', function (e) { pageIndex = e.getState("pageIndex") || 0; pageIndex = parseInt(pageIndex); showMovies(); }); $(window).trigger('hashchange'); function showMovies() { // Build OData query var query = "/MovieService.svc" // base URL + "/Movies" // top-level resource + "?$skip=" + pageIndex * pageSize // skip records + "&$top=" + pageSize // take records +" &$inlinecount=allpages"; // include total count of movies // Make call to WCF Data Service $.ajax({ dataType: "json", url: query, success: showMoviesComplete }); } function showMoviesComplete(result) { // unwrap results var movies = result["d"]["results"]; var movieCount = result["d"]["__count"] // Show movies using template var showMovie = tmpl("<li><%=Id%> - <%=Title %></li>"); var html = ""; for (var i = 0; i < movies.length; i++) { html += showMovie(movies[i]); } $("#moviesContainer").html(html); // show pager $("#pager").pager({ pagenumber: (pageIndex + 1), pagecount: Math.ceil(movieCount / pageSize), buttonClickCallback: selectPage }); // Update page number and page count $("#pageNumber").text(pageIndex + 1); $("#pageCount").text(movieCount); } function selectPage(pageNumber) { pageIndex = pageNumber - 1; $.bbq.pushState({ pageIndex: pageIndex }); } </script> </body> </html> Notice the first chunk of JavaScript code in Listing 3: $(window).bind('hashchange', function (e) { pageIndex = e.getState("pageIndex") || 0; pageIndex = parseInt(pageIndex); showMovies(); }); $(window).trigger('hashchange'); When the hashchange event occurs, the current pageIndex is retrieved by calling the e.getState() method. The value is returned as a string and the value is cast to an integer by calling the JavaScript parseInt() function. Next, the showMovies() method is called to display the page of movies. 
The $(window).trigger() method is called to raise the hashchange event so that the initial page of records will be displayed. When you click a page number, the selectPage() method is invoked. This method adds the current page index to the address by calling the following method: $.bbq.pushState({ pageIndex: pageIndex }); For example, if you click page number 2 then page index 1 is saved to the URL, and the browser address is updated to look like this: /Default.htm#pageIndex=1 If you click page 3 then the browser address is updated to look like this: /Default.htm#pageIndex=2 Because the browser address is updated when you navigate to a new page number, the browser backwards and forwards buttons will work to navigate you backwards and forwards through the page numbers. When you click page 2, and then click the backwards button, you will navigate back to page 1. Furthermore, you can bookmark a particular page of records. For example, if you bookmark the URL /Default.htm#pageIndex=1 then you will get the second page of records whenever you open the bookmark. Summary You should not avoid building Ajax applications because of worries concerning browser history or bookmarks. By taking advantage of a JavaScript library such as the bbq library, you can make your Ajax applications behave in exactly the same way as a normal web application.
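To recap the essentials, here is a minimal sketch of the pattern (assuming jQuery and jquery.ba-bbq.js are loaded, and a render() function of your own that draws a given page of records):

    // Re-render whenever the hash changes (back/forward buttons, bookmarks, pushState).
    $(window).bind('hashchange', function (e) {
        var pageIndex = parseInt(e.getState("pageIndex") || 0, 10);
        render(pageIndex);
    });

    // Navigation only writes state to the hash; the handler above does the rendering.
    function goToPage(pageIndex) {
        $.bbq.pushState({ pageIndex: pageIndex });
    }

    // Render the initial state (including a bookmarked #pageIndex=n) on page load.
    $(window).trigger('hashchange');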


  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation delete the Azure deployment. The standing joke with the audience was that it was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour was $1.92, factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing, you pay for what you use, and if you need massive compute power for a short period of time using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour.  This article will take a run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that become popular in the 80’s and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produced a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
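A minimal sketch of that generator console application might look like the following; the grid dimensions, spacing, and pin radii here are illustrative assumptions rather than the values used for the actual board:

    // Writes two offset grids of pins as PolyRay "object" lines for the pin-board scene.
    using System.Globalization;
    using System.IO;

    class PinBoardGenerator
    {
        static void Main()
        {
            var inv = CultureInfo.InvariantCulture;
            using (var writer = new StreamWriter("pins.pi"))
            {
                for (int row = 0; row < 76; row++)
                {
                    for (int col = 0; col < 75; col++)
                    {
                        // Offset every second row by half the spacing so the two
                        // grids intersect, as in a real pin-board toy.
                        double x = -5.0 + col * 0.14 + (row % 2 == 1 ? 0.07 : 0.0);
                        double y = -1.0 + row * 0.085;
                        writer.WriteLine(string.Format(inv,
                            "object {{ sphere <{0:0.000}, {1:0.000}, 0.000>, 0.07 chrome }}", x, y));
                        writer.WriteLine(string.Format(inv,
                            "object {{ cylinder <{0:0.000}, {1:0.000}, 0.000>, <{0:0.000}, {1:0.000}, -4.000>, 0.035 chrome }}", x, y));
                    }
                }
            }
        }
    }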
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used. Windows Kinect The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

• On-Premise
  o Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
  o Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
  o Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
  o Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
• Windows Azure
  o Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image. 
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The first 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.   Completed Animation I’ve uploaded the completed animation to YouTube, a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles   The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes CPU to render, this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilized the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective. ·         Windows Azure can provide massive compute power, on demand, in a matter of minutes. ·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution. ·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget. ·         No charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.) Tips for using Windows Azure for Grid Computing Scenarios I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes, in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure. ·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. ·         Windows Azure accounts are typically limited to 20 cores. 
If you need to use more than this, a call to support and a credit card check will be required. ·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started. ·         Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle. ·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective. ·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks (a sketch of such a task follows these tips). Bear in mind that adding a startup task and possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles. ·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
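For reference, a start-up task of the kind mentioned above is declared in the worker role's service definition. A minimal sketch, using the element names from the Azure SDK schema (the install.cmd script name is an assumption):

    <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
      <Startup>
        <!-- Runs before the role starts; install third party software here. -->
        <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
      </Startup>
      ...
    </WorkerRole>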


  • R Package Installation with Oracle R Enterprise

    - by Sherry LaMonica-Oracle
Programming languages give developers the opportunity to write reusable functions and to bundle those functions into logical deployable entities. In R, these are called packages. R has thousands of such packages provided by an almost equally large group of third-party contributors. To allow others to benefit from these packages, users can share packages on the CRAN system for use by the vast R development community worldwide. R's package system along with the CRAN framework provides a process for authoring, documenting and distributing packages to millions of users. In this post, we'll illustrate the various ways in which such R packages can be installed for use with R and together with Oracle R Enterprise. In the following, the same instructions apply when using either open source R or Oracle R Distribution. In this post, we cover the following package installation scenarios: the R command line, the Linux shell command line, use with Oracle R Enterprise, installation on Exadata or RAC, installing all packages in a CRAN Task View, and troubleshooting common errors. 1. R Package Installation Basics R package installation basics are outlined in Chapter 6 of the R Installation and Administration Guide. There are two ways to install packages from the command line: from the R command line and from the shell command line. For this first example on Oracle Linux using Oracle R Distribution, we'll install the arules package as root so that packages will be installed in the default R system-wide location where all users can access it, /usr/lib64/R/library. Within R, using the install.packages function always attempts to install the latest version of the requested package available on CRAN: R> install.packages("arules") If the arules package depends upon other packages that are not already installed locally, the R installer automatically downloads and installs those required packages. This is a huge benefit that frees users from the task of identifying and resolving those dependencies. You can also install R packages from the shell command line. This is useful for some packages when an internet connection is not available or for installing packages not uploaded to CRAN. To install packages this way, first locate the package on CRAN and then download the package source to your local machine. For example: $ wget http://cran.r-project.org/src/contrib/arules_1.1-2.tar.gz Then, install the package using the command R CMD INSTALL: $ R CMD INSTALL arules_1.1-2.tar.gz A major difference between installing R packages using the R package installer at the R command line and shell command line is that package dependencies must be resolved manually at the shell command line. Package dependencies are listed in the Depends section of the package's CRAN site. If dependencies are not identified and installed prior to the package's installation, you will see an error similar to: ERROR: dependency 'xxx' is not available for package 'yyy' As a best practice and to save time, always refer to the package's CRAN site to understand the package dependencies prior to attempting an installation. If you don't run R as root, you won't have permission to write packages into the default system-wide location and you will be prompted to create a personal library accessible by your userid. You can accept the personal library path chosen by R, or specify the library location by passing parameters to the install.packages function. 
For example, to create an R package repository in your home directory: R> install.packages("arules", lib="/home/username/Rpackages")or$ R CMD INSTALL arules_1.1-2.tar.gz --library=/home/username/RpackagesRefer to the install.packages help file in R or execute R CMD INSTALL --help at the shell command line for a full list of command line options.To set the library location and avoid having to specify this at every package install, simply create the R startup environment file .Renviron in your home area if it does not already exist, and add the following piece of code to it:R_LIBS_USER = "/home/username/Rpackages" 2. Setting the RepositoryEach time you install an R package from the R command line, you are asked which CRAN mirror, or server, R should use. To set the repository and avoid having to specify this during every package installation, create the R startup command file .Rprofile in your home directory and add the following R code to it:cat("Setting Seattle repository")r = getOption("repos") r["CRAN"] = "http://cran.fhcrc.org/"options(repos = r)rm(r) This code snippet sets the R package repository to the Seattle CRAN mirror at the start of each R session. 3. Installing R Packages for use with Oracle R EnterpriseEmbedded R execution with Oracle R Enterprise allows the use of CRAN or other third-party R packages in user-defined R functions executed on the Oracle Database server. The steps for installing and configuring packages for use with Oracle R Enterprise are the same as for open source R. The database-side R engine just needs to know where to find the R packages.The Oracle R Enterprise installation is performed by user oracle, which typically does not have write permission to the default site-wide library, /usr/lib64/R/library. On Linux and UNIX platforms, the Oracle R Enterprise Server installation provides the ORE script, which is executed from the operating system shell to install R packages and to start R. The ORE script is a wrapper for the default R script, a shell wrapper for the R executable. It can be used to start R, run batch scripts, and build or install R packages. Unlike the default R script, the ORE script installs packages to a location writable by user oracle and accessible by all ORE users - $ORACLE_HOME/R/library.To install a package on the database server so that it can be used by any R user and for use in embedded R execution, an Oracle DBA would typically download the package source from CRAN using wget. If the package depends on any packages that are not in the R distribution in use, download the sources for those packages, also.  For a single Oracle Database instance, replace the R script with ORE to install the packages in the same location as the Oracle R Enterprise packages. $ wget http://cran.r-project.org/src/contrib/arules_1.1-2.tar.gz$ ORE CMD INSTALL arules_1.1-2.tar.gzBehind the scenes, the ORE script performs the equivalent of setting R_LIBS_USER to the value of $ORACLE_HOME/R/library, and all R packages installed with the ORE script are installed to this location. 
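In other words, for package installation the effect is roughly what you would get from the following shell commands (a simplified sketch; the real ORE script does more than this):

    # Roughly what "ORE CMD INSTALL arules_1.1-2.tar.gz" does for package installation:
    export R_LIBS_USER=$ORACLE_HOME/R/library
    R CMD INSTALL arules_1.1-2.tar.gz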
For installing a package on multiple database servers, such as those in an Oracle Real Application Clusters (Oracle RAC) or a multinode Oracle Exadata Database Machine environment, use the ORE script in conjunction with the Exadata Distributed Command Line Interface (DCLI) utility.$ dcli -g nodes -l oracle ORE CMD INSTALL arules_1.1-1.tar.gz The DCLI -g flag designates a file containing a list of nodes to install on, and the -l flag specifies the user id to use when executing the commands. For more information on using DCLI with Oracle R Enterprise, see Chapter 5 in the Oracle R Enterprise Installation Guide.If you are using an Oracle R Enterprise client, install the package the same as any R package, bearing in mind that you must install the same version of the package on both the client and server machines to avoid incompatibilities. 4. CRAN Task ViewsCRAN also maintains a set of Task Views that identify packages associated with a particular task or methodology. Task Views are helpful in guiding users through the huge set of available R packages. They are actively maintained by volunteers who include detailed annotations for routines and packages. If you find one of the task views is a perfect match, you can install every package in that view using the ctv package - an R package for automating package installation. To use the ctv package to install a task view, first, install and load the ctv package.R> install.packages("ctv")R> library(ctv)Then query the names of the available task views and install the view you choose.R> available.views() R> install.views("TimeSeries") 5. Using and Managing R packages To use a package, start up R and load packages one at a time with the library command.Load the arules package in your R session. R> library(arules)Verify the version of arules installed.R> packageVersion("arules")[1] '1.1.2'Verify the version of arules installed on the database server using embedded R execution.R> ore.doEval(function() packageVersion("arules"))View the help file for the apropos function in the arules packageR> ?aproposOver time, your package repository will contain more and more packages, especially if you are using the system-wide repository where others are adding additional packages. It’s good to know the entire set of R packages accessible in your environment. To list all available packages in your local R session, use the installed.packages command:R> myLocalPackages <- row.names(installed.packages())R> myLocalPackagesTo access the list of available packages on the ORE database server from the ORE client, use the following embedded R syntax: R> myServerPackages <- ore.doEval(function() row.names(installed.packages()) R> myServerPackages 6. Troubleshooting Common ProblemsInstalling Older Versions of R packagesIf you immediately upgrade to the latest version of R, you will have no problem installing the most recent versions of R packages. 
However, if your version of R is older, some of the more recent package releases will not work and install.packages will generate a message such as: Warning message: In install.packages("arules") : package ‘arules’ is not availableThis is when you have to go to the Old sources link on the CRAN page for the arules package and determine which version is compatible with your version of R.Begin by determining what version of R you are using:$ R --versionOracle Distribution of R version 3.0.1 (--) -- "Good Sport" Copyright (C) The R Foundation for Statistical Computing Platform: x86_64-unknown-linux-gnu (64-bit)Given that R-3.0.1 was released May 16, 2013, any version of the arules package released after this date may work. Scanning the arules archive, we might try installing version 0.1.1-1, released in January of 2014:$ wget http://cran.r-project.org/src/contrib/Archive/arules/arules_1.1-1.tar.gz$ R CMD INSTALL arules_1.1-1.tar.gzFor use with ORE:$ ORE CMD INSTALL arules_1.1-1.tar.gzThe "package not available" error can also be thrown if the package you’re trying to install lives elsewhere, either another R package site, or it’s been removed from CRAN. A quick Google search usually leads to more information on the package’s location and status.Oracle R Enterprise is not in the R library pathOn Linux hosts, after installing the ORE server components, starting R, and attempting to load the ORE packages, you may receive the error:R> library(ORE)Error in library(ORE) : there is no package called ‘ORE’If you know the ORE packages have been installed and you receive this error, this is the result of not starting R with the ORE script. To resolve this problem, exit R and restart using the ORE script. After restarting R and ">running the command to load the ORE packages, you should not receive any errors.$ ORER> library(ORE)On Windows servers, the solution is to make the location of the ORE packages visible to R by adding them to the R library paths. To accomplish this, exit R, then add the following lines to the .Rprofile file. On Windows, the .Rprofile file is located in R\etc directory C:\Program Files\R\R-<version>\etc. Add the following lines:.libPaths("<path to $ORACLE_HOME>/R/library")The above line will tell R to include the R directory in the Oracle home as part of its search path. When you start R, the path above will be included, and future R package installations will also be saved to $ORACLE_HOME/R/library. This path should be writable by the user oracle, or the userid for the DBA tasked with installing R packages.Binary package compiled with different version of RBy default, R will install pre-compiled versions of packages if they are found. If the version of R under which the package was compiled does not match your installed version of R you will get an error message:Warning message: package ‘xxx’ was built under R version 3.0.0The solution is to download the package source and build it for your version of R.$ wget http://cran.r-project.org/src/contrib/Archive/arules/arules_1.1-1.tar.gz$ R CMD INSTALL arules_1.1-1.tar.gzFor use with ORE:$ ORE CMD INSTALL arules_1.1-1.tar.gzUnable to execute files in /tmp directoryBy default, R uses the /tmp directory to install packages. On security conscious machines, the /tmp directory is often marked as "noexec" in the /etc/fstab file. 
This means that no file under /tmp can ever be executed, and users who attempt to install R package will receive an error:ERROR: 'configure' exists but is not executable -- see the 'R Installation and Administration Manual’The solution is to set the TMP and TMPDIR environment variables to a location which R will use as the compilation directory. For example:$ mkdir <some path>/tmp$ export TMPDIR= <some path>/tmp$ export TMP= <some path>/tmpThis error typically appears on Linux client machines and not database servers, as Oracle Database writes to the value of the TMP environment variable for several tasks, including holding temporary files during database installation. 7. Creating your own R packageCreating your own package and submitting to CRAN is for advanced users, but it is not difficult. The procedure to follow, along with details of R's package system, is detailed in the Writing R Extensions manual.
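As a quick taste, the package.skeleton function generates the basic package structure from objects in your workspace, and the package is then built, checked, and installed from the shell. A minimal sketch using placeholder names:

    R> myfun <- function(x) x^2
    R> mydata <- data.frame(x = 1:10, y = (1:10)^2)
    R> package.skeleton(name = "mypackage", list = c("myfun", "mydata"))
    # Edit the generated DESCRIPTION and .Rd help files, then build, check and install from the shell:
    $ R CMD build mypackage
    $ R CMD check mypackage_1.0.tar.gz
    $ R CMD INSTALL mypackage_1.0.tar.gz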


  • Autounattend.xml not being recognized in VirtualBox

    - by beagle
    I am working my way through the steps on this page to prepare an unattended installation of Windows 7 Enterprise x64 for purposes of a college assignment which simply requires the process to be carried out and documented. Both the "technician" and "reference" computers are virtual machines created in VirtualBox 4.3.12, as will be the destination computer. I seem to have successfully completed Step 1, building an Autounattend.xml answer file using Windows System Image Manager, in as far as the answer file validates successfully. The problem arises when I try to install Windows on the reference machine from the DVD image in conjunction with the Autounattend file on a USB drive. I have tried a couple of different USB devices, and the devices themselves seem to be recognized, but the answer file does not, as instead of taking the configuration settings from the file the user interface appears as in a manual installation. Has anyone come across this problem or a solution? The xml created by Windows SIM is below for reference in case the problem is with the file itself. <?xml version="1.0" encoding="utf-8"?> <unattend xmlns="urn:schemas-microsoft-com:unattend"> <settings pass="oobeSystem"> <component name="Microsoft-Windows-Deployment" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <Reseal> <Mode>Audit</Mode> </Reseal> </component> <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <OOBE> <HideEULAPage>true</HideEULAPage> <ProtectYourPC>3</ProtectYourPC> </OOBE> </component> </settings> <settings pass="windowsPE"> <component name="Microsoft-Windows-International-Core-WinPE" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <SetupUILanguage> <UILanguage>en-IE</UILanguage> </SetupUILanguage> <InputLocale>en-IE</InputLocale> <SystemLocale>en-IE</SystemLocale> <UILanguage>en-IE</UILanguage> <UserLocale>en-IE</UserLocale> </component> <component name="Microsoft-Windows-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <DiskConfiguration> <Disk wcm:action="add"> <CreatePartitions> <CreatePartition wcm:action="add"> <Order>1</Order> <Size>300</Size> <Type>Primary</Type> </CreatePartition> <CreatePartition wcm:action="add"> <Order>2</Order> <Extend>true</Extend> <Type>Primary</Type> </CreatePartition> </CreatePartitions> <ModifyPartitions> <ModifyPartition wcm:action="add"> <Active>true</Active> <Format>NTFS</Format> <Label>System</Label> <Order>1</Order> <PartitionID>1</PartitionID> </ModifyPartition> <ModifyPartition wcm:action="add"> <Format>NTFS</Format> <Label>Windows</Label> <Order>2</Order> <PartitionID>2</PartitionID> </ModifyPartition> </ModifyPartitions> <DiskID>0</DiskID> <WillWipeDisk>true</WillWipeDisk> </Disk> <WillShowUI>OnError</WillShowUI> </DiskConfiguration> <ImageInstall> <OSImage> <InstallTo> <DiskID>0</DiskID> <PartitionID>2</PartitionID> 
</InstallTo> <InstallToAvailablePartition>false</InstallToAvailablePartition> <WillShowUI>OnError</WillShowUI> </OSImage> </ImageInstall> <UserData> <ProductKey> <WillShowUI>OnError</WillShowUI> </ProductKey> <AcceptEula>true</AcceptEula> </UserData> </component> </settings> <settings pass="specialize"> <component name="Microsoft-Windows-IE-InternetExplorer" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <Home_Page>http://www.example.com</Home_Page> </component> </settings> <cpi:offlineImage cpi:source="wim://technician/users/user/desktop/install.wim#Windows 7 ENTERPRISE" xmlns:cpi="urn:schemas-microsoft-com:cpi" />

    Read the article

  • Developer Dashboard in SharePoint 2010

    - by jcortez
    Introducing the Developer Dashboard
As a SharePoint developer (or IT Professional), how many times have you had the pleasure of figuring out why a particular page on your site is taking too long to render? I'm sure one of the techniques you have employed in troubleshooting is the process of elimination - removing individual web parts from the page hoping to identify which web part is misbehaving. One of the new features of SharePoint 2010 is the Developer Dashboard. This dashboard provides tracing and performance information that can be useful when you are trying to troubleshoot pages that are loading too slowly. The Developer Dashboard is turned off by default, and I'll go over 3 different ways to display it.
When displayed at the bottom of the page, the Developer Dashboard shows on the left side the different events that fired during the page processing pipeline and how long these events took. This is where you will see individual web parts being processed and how long each took to complete (obviously the kind of processing depends on what the web part does). On the right side you see the different database calls issued through the SharePoint Object Model to process the page. Each of these database queries is actually a hyperlink, and clicking on it displays a pop-up window that shows the actual SQL Query Text, the Call Stack that triggered the database call, and the IO statistics of that query.
Enabling the Developer Dashboard
Option 1: Managed Code
You can turn the dashboard on through the SharePoint Object Model (a minimal sketch follows below). The Developer Dashboard is a farm-wide setting, so this code won't work if it is used within a web part hosted on any non-Central Admin site. The SPDeveloperDashboardLevel enum has three possible values: On, Off, and OnDemand. Setting it to On will always display the Developer Dashboard at the bottom of the page. Setting it to Off will hide the Developer Dashboard. Setting it to OnDemand will add an icon at the top right corner of the page where a Site Collection Admin can toggle the display of the Developer Dashboard for a particular site collection. In my opinion, OnDemand is the best setting when troubleshooting a page or during development, since a Site Collection Admin can turn it on or off, and for a particular site collection only. The first cool thing about this is that the Site Collection Admin who turned it on will be the only one to see the Developer Dashboard output. Everyday users won't see the Developer Dashboard output even if it was turned on by a Site Collection Admin. If you need more flexibility on who gets to see the Developer Dashboard output, you can set SPDeveloperDashboardSettings.RequiredPermissions to control which group of users will have permission to see the output.
Option 2: Using stsadm
Using stsadm, you can run the following command to configure the Developer Dashboard:
STSADM -o setproperty -pn developer-dashboard -pv OnDemand
To successfully execute this command, be sure that you are running as a Farm Admin.
Option 3: Using PowerShell
For all scripts in SharePoint 2010, I prefer writing them as PowerShell scripts. Though the stsadm command is less verbose, the PowerShell equivalent is pretty straightforward and sets the same DisplayLevel property through the SharePoint Object Model. You can of course parameterize the value that gets assigned to the DisplayLevel property so you can turn it On, Off or OnDemand depending on the parameter.
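A minimal sketch of the managed-code option (Option 1) might look like the following; the namespace, class and method wrapper are placeholders, and the snippet assumes it runs under a farm administrator account on a SharePoint 2010 server (for example, a small console application):

using Microsoft.SharePoint.Administration;

namespace DashboardSample // placeholder namespace
{
    class EnableDashboard
    {
        static void Main()
        {
            // The Developer Dashboard is configured on the farm-wide content web service
            SPWebService contentService = SPWebService.ContentService;

            // OnDemand adds the toggle icon; On always shows the dashboard; Off hides it
            contentService.DeveloperDashboardSettings.DisplayLevel = SPDeveloperDashboardLevel.OnDemand;

            // Persist the change across the farm
            contentService.DeveloperDashboardSettings.Update();
        }
    }
}

The stsadm and PowerShell options described above set this same DisplayLevel setting.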
Events and the Developer Dashboard
Now, don't assume that all the code inside your web part or page will show up in the Developer Dashboard complete with all the great troubleshooting information. Only a finite set of events is monitored by default (for a web part, these are the events in the base web part class). Let's say you have a click event that could take some time, for example a web service call, and you want to include troubleshooting information for this event in the Developer Dashboard. Enter SPMonitoredScope, which is also a new feature in SharePoint 2010. In SharePoint 2010, everything is executed within a "Monitored Scope", and each scope has a set of "Monitors" that measure and count calls and timings, which appear in the Developer Dashboard. The way to get your custom code included in the Developer Dashboard is to wrap it inside a new monitored scope (a sketch follows below). Wrapping the slow code in a scope named "My long web service call" would include that scope in the Developer Dashboard and would log the time it took to complete processing. In my opinion, wrapping your custom code in a SPMonitoredScope is a SharePoint development best practice since it gives you visibility into, and a better understanding of, the performance of your components.
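A minimal sketch of that wrapping might look like the following; the web part class and the CallRemoteService method are placeholders for whatever long-running work your component actually does:

using System;
using Microsoft.SharePoint.Utilities;
using Microsoft.SharePoint.WebPartPages;

public class SlowWebPart : WebPart // placeholder web part
{
    protected override void CreateChildControls()
    {
        // Everything executed inside this using block is measured as its own scope
        // and shows up in the Developer Dashboard under the name given below.
        using (new SPMonitoredScope("My long web service call"))
        {
            CallRemoteService(); // placeholder for the slow work, e.g. a web service call
        }
    }

    private void CallRemoteService()
    {
        // hypothetical long-running call
    }
}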

    Read the article

  • Using jQuery and OData to Insert a Database Record

    - by Stephen Walther
    In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData. Introduction to OData The Open Data Protocol (OData) was developed by Microsoft to be an open standard for communicating data across the Internet. Because the protocol is compatible with standards such as REST and JSON, the protocol is particularly well suited for Ajax. OData has undergone several name changes. It was previously referred to as Astoria and ADO.NET Data Services. OData is used by Sharepoint Server 2010, Azure Storage Services, Excel 2010, SQL Server 2008, and project code name “Dallas.” Because OData is being adopted as the public interface of so many important Microsoft technologies, it is a good protocol to learn. You can learn more about OData by visiting the following websites: http://www.odata.org http://msdn.microsoft.com/en-us/data/bb931106.aspx When using the .NET framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table. Create the Database and Data Model The MoviesDB database is a simple database that contains the following Movies table: You need to create a data model to represent the MoviesDB database. In this blog entry, I use the ADO.NET Entity Framework to create my data model. However, WCF Data Services and OData are not tied to any particular OR/M framework such as the ADO.NET Entity Framework. For details on creating the Entity Framework data model for the MoviesDB database, see the previous blog entry. Create a WCF Data Service You create a new WCF Service by selecting the menu option Project, Add New Item and selecting the WCF Data Service item template (see Figure 1). Name the new WCF Data Service MovieService.svc. Figure 1 – Adding a WCF Data Service Listing 1 contains the default code that you get when you create a new WCF Data Service. There are two things that you need to modify. Listing 1 – New WCF Data Service File using System; using System.Collections.Generic; using System.Data.Services; using System.Data.Services.Common; using System.Linq; using System.ServiceModel.Web; using System.Web; namespace WebApplication1 { public class MovieService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } First, you need to replace the comment /* TODO: put your data source class name here */ with a class that represents the data that you want to expose from the service. In our case, we need to replace the comment with a reference to the MoviesDBEntities class generated by the Entity Framework. Next, you need to configure the security for the WCF Data Service. By default, you cannot query or modify the movie data. We need to update the Entity Set Access Rule to enable us to insert a new database record. 
The updated MovieService.svc is contained in Listing 2: Listing 2 – MovieService.svc using System.Data.Services; using System.Data.Services.Common; namespace WebApplication1 { public class MovieService : DataService<MoviesDBEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Movies", EntitySetRights.AllWrite); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } That’s all we have to do. We can now insert a new Movie into the Movies database table by posting a new Movie to the following URL: /MovieService.svc/Movies The request must be a POST request. The Movie must be represented as JSON. Using jQuery with OData The HTML page in Listing 3 illustrates how you can use jQuery to insert a new Movie into the Movies database table using the OData protocol. Listing 3 – Default.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>jQuery OData Insert</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input id="title" /> <br /> <label>Director:</label> <input id="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { // Convert the form into an object var data = { Title: $("#title").val(), Director: $("#director").val() }; // JSONify the data var data = JSON.stringify(data); // Post it $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "MovieService.svc/Movies", data: data, dataType: "json", success: insertCallback }); }); function insertCallback(result) { // unwrap result var newMovie = result["d"]; // Show primary key alert("Movie added with primary key " + newMovie.Id); } </script> </body> </html> jQuery does not include a JSON serializer. Therefore, we need to include the JSON2 library to serialize the new Movie that we wish to create. The Movie is serialized by calling the JSON.stringify() method: var data = JSON.stringify(data); You can download the JSON2 library from the following website: http://www.json.org/js.html The jQuery ajax() method is called to insert the new Movie. Notice that both the contentType and dataType are set to use JSON. The jQuery ajax() method is used to perform a POST operation against the URL MovieService.svc/Movies. Because the POST payload contains a JSON representation of a new Movie, a new Movie is added to the database table of Movies. When the POST completes successfully, the insertCallback() method is called. The new Movie is passed to this method. The method simply displays the primary key of the new Movie: Summary The OData protocol (and its enabling technology named WCF Data Services) works very nicely with Ajax. By creating a WCF Data Service, you can quickly expose your database data to an Ajax application by taking advantage of open standards such as REST, JSON, and OData. In the next blog entry, I want to take a closer look at how the OData protocol supports different methods of querying data.

    Read the article

  • How to configure LocalSessionFactoryBean to release connections after transaction end?

    - by peter
    I am testing an application (Spring 2.5, Hibernate 3.5.0 Beta, Atomikos 3.6.2, and Postgreql 8.4.2) with the configuration for the DAO listed below. The problem that I see is that the pool of 10 connections with the dataSource gets exhausted after the 10's transaction. I know 'hibernate.connection.release_mode' has no effect unless the session is obtained with openSession rather then using a contextual session. I am wandering if anyone has found a way to configure the LocalSessionFactoryBean to release connections after any transaction. Thank you Peter <bean id="dataSource" class="com.atomikos.jdbc.AtomikosDataSourceBean" init-method="init" destroy-method="close"> <property name="uniqueResourceName"><value>XADBMS</value></property> <property name="xaDataSourceClassName"> <value>org.postgresql.xa.PGXADataSource</value> </property> <property name="xaProperties"> <props> <prop key="databaseName">${jdbc.name}</prop> <prop key="serverName">${jdbc.server}</prop> <prop key="portNumber">${jdbc.port}</prop> <prop key="user">${jdbc.username}</prop> <prop key="password">${jdbc.password}</prop> </props> </property> <property name="poolSize"><value>10</value></property> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource" /> </property> <property name="mappingResources"> <list> <value>Abc.hbm.xml</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop> <prop key="hibernate.show_sql">on</prop> <prop key="hibernate.format_sql">true</prop> <prop key="hibernate.connection.isolation">3</prop> <prop key="hibernate.current_session_context_class">jta</prop> <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.release_mode">auto</prop> <prop key="hibernate.transaction.auto_close_session">true</prop> </props> </property> </bean> <!-- Transaction definition here --> <bean id="userTransactionService" class="com.atomikos.icatch.config.UserTransactionServiceImp" init-method="init" destroy-method="shutdownForce"> <constructor-arg> <props> <prop key="com.atomikos.icatch.service"> com.atomikos.icatch.standalone.UserTransactionServiceFactory </prop> </props> </constructor-arg> </bean> <!-- Construct Atomikos UserTransactionManager, needed to configure Spring --> <bean id="AtomikosTransactionManager" class="com.atomikos.icatch.jta.UserTransactionManager" init-method="init" destroy-method="close" depends-on="userTransactionService"> <property name="forceShutdown" value="false" /> </bean> <!-- Also use Atomikos UserTransactionImp, needed to configure Spring --> <bean id="AtomikosUserTransaction" class="com.atomikos.icatch.jta.UserTransactionImp" depends-on="userTransactionService"> <property name="transactionTimeout" value="300" /> </bean> <!-- Configure the Spring framework to use JTA transactions from Atomikos --> <bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager" depends-on="userTransactionService"> <property name="transactionManager" ref="AtomikosTransactionManager" /> <property name="userTransaction" ref="AtomikosUserTransaction" /> </bean> <!-- the transactional advice (what 'happens'; see the <aop:advisor/> bean below) --> <tx:advice id="txAdvice" transaction-manager="txManager"> 
<tx:attributes> <!-- all methods starting with 'get' are read-only --> <tx:method name="get*" read-only="true" propagation="REQUIRED"/> <!-- other methods use the default transaction settings (see below) --> <tx:method name="*" propagation="REQUIRED"/> </tx:attributes> </tx:advice> <aop:config> <aop:advisor pointcut="execution(* *.*.AbcDao.*(..))" advice-ref="txAdvice"/> </aop:config> <!-- DAO objects --> <bean id="abcDao" class="test.dao.impl.HibernateAbcDao" scope="singleton"> <property name="sessionFactory" ref="sessionFactory"/> </bean>

    Read the article

  • How to configure hibernate-tools with maven to generate hibernate.cfg.xml, *.hbm.xml, POJOs and DAOs

    - by mmm
    Hi, can any one tell me how to force maven to precede mapping .hbm.xml files in the automatically generated hibernate.cfg.xml file with package path? My general idea is, I'd like to use hibernate-tools via maven to generate the persistence layer for my application. So, I need the hibernate.cfg.xml, then all my_table_names.hbm.xml and at the end the POJO's generated. Yet, the hbm2java goal won't work as I put *.hbm.xml files into the src/main/resources/package/path/ folder but hbm2cfgxml specifies the mapping files only by table name, i.e.: <mapping resource="MyTableName.hbm.xml" /> So the big question is: how can I configure hbm2cfgxml so that hibernate.cfg.xml looks like below: <mapping resource="package/path/MyTableName.hbm.xml" /> My pom.xml looks like this at the moment: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>hibernate3-maven-plugin</artifactId> <version>2.2</version> <executions> <execution> <id>hbm2cfgxml</id> <phase>generate-sources</phase> <goals> <goal>hbm2cfgxml</goal> </goals> <inherited>false</inherited> <configuration> <components> <component> <name>hbm2cfgxml</name> <implemetation>jdbcconfiguration</implementation> <outputDirectory>src/main/resources/</outputDirectory> </component> </components> <componentProperties> <packagename>package.path</packageName> <configurationFile>src/main/resources/hibernate.cfg.xml</configurationFile> </componentProperties> </configuration> </execution> </executions> </plugin> And then the second question: is there a way to tell maven to copy resources to the target folder before executing hbm2java? At the moment I'm using mvn clean resources:resources generate-sources for that, but there must be a better way. Thanks for any help. Update: @Pascal: Thank you for your help. The path to mappings works fine now, I don't know what was wrong before, though. Maybe there is some issue with writing to hibernate.cfg.xml while reading database config from it (though the file gets updated). I've deleted the file hibernate.cfg.xml, replaced it with database.properties and run the goals hbm2cfgxml and hbm2hbmxml. I also don't use the outputDirectory nor configurationfile in those goals anymore. As a result the files hibernate.cfg.xml and all *.hbm.xml are being generated into my target/hibernate3/generated-mappings/ folder, which is the default value. Then I updated the hbm2java goal with the following: <componentProperties> <packagename>package.name</packagename> <configurationfile>target/hibernate3/generated-mappings/hibernate.cfg.xml</configurationfile> </componentProperties> But then I get the following: [INFO] --- hibernate3-maven-plugin:2.2:hbm2java (hbm2java) @ project.persistence --- [INFO] using configuration task. 
[INFO] Configuration XML file loaded: file:/C:/Documents%20and%20Settings/mmm/workspace/project.persistence/target/hibernate3/generated-mappings/hibernate.cfg.xml 12:15:17,484 INFO org.hibernate.cfg.Configuration - configuring from url: file:/C:/Documents%20and%20Settings/mmm/workspace/project.persistence/target/hibernate3/generated-mappings/hibernate.cfg.xml 12:15:19,046 INFO org.hibernate.cfg.Configuration - Reading mappings from resource : package.name/Messages.hbm.xml [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.codehaus.mojo:hibernate3-maven-plugin:2.2:hbm2java (hbm2java) on project project.persistence: Execution hbm2java of goal org.codehaus.mojo:hibernate3-maven-plugin:2.2:hbm2java failed: resource: package/name/Messages.hbm.xml not found How do I deal with that? Of course I could add: <outputDirectory>src/main/resources/package/name</outputDirectory> to the hbm2hbmxml goal, but I think this is not the best approach, or is it? Is there a way to keep all the generated code and resources away from the src/ folder? I assume, the goal of this approach is not to generate any sources into my src/main/java or /resources folder, but to keep the generated code in the target folder. As I generally agree with this point of view, I'd like to continue with that eventually executing hbm2dao and packaging the project to be used as a generated persistence layer component from the business layer. Is this also what you meant?

    Read the article

  • BizTalk host throttling – Singleton pattern and High database size

    - by S.E.R.
Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx
I have worked for some days around the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e., it keeps looping and never terminates or completes). As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little bit and see what happens deep inside BizTalk and thus understand the reasons for this behavior.
NOTE: in this article, I will focus on this High database size throttling condition. I will try and work on the other conditions in some not too distant future…
Test conditions
The singleton orchestration
For the purpose of this study, I have created an orchestration which is a very basic implementation of a singleton: it piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message.
Throttling settings
I have two distinct hosts: one that hosts the receive port (a basic FILE port), Ports_ReceiveHost, and one that hosts the orchestration, ProcessingHost. In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value):
[Throttling thresholds] Message count in database: 500 (default value: 50000)
Evolution of performance counters when submitting messages
Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object): Database size, High database size, Message delivery throttling state, Message publishing throttling state, Message delivery delay (ms), Message publishing delay (ms), Message delivery throttling state duration, Message publishing throttling state duration. (If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.)
Database size
It is quite obvious that we will start by watching the database size and high database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm.
NOTE: During this test I submitted 600 messages, one message at a time every 10 ms, to see the evolution of the counters we have previously selected. Here is what happened: from 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host (blue line) kept growing until it reached a maximum of 504. At 15:47:50, the high database size alert fires.
At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense: this counter measures the size of the database queue that is being filled by the host, not consumed by it.
Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page.
Now, looking at the Message publishing throttling state for the receiving host (green line), we can see that a throttling condition was reached at 15:47:50. We can also see that the Message publishing delay (ms) (blue line) has begun growing slowly from this point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and once it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing.
Digging further
So, what happens to the database queue then? Is it flushed some day, or does it keep growing and growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long, otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (i.e., exactly 30 min later) the throttling stopped and the database size was divided by 2. Thirty minutes later again, the database size had dropped to almost zero. I guess I’ll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one.
Digging even further
If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day… :)
A last word
Let’s end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding! So whatever you do with those settings, do a lot of testing and benchmarking!

    Read the article

  • Handling Trailing Delimiters in HL7 Messages

    - by Thomas Canter
    Applies to: BizTalk Server 2006 with the HL7 1.3 Accelerator
Outline of the problem
Trailing delimiters are empty values at the end of an object in an HL7 ER7-formatted message. Examples:
Empty Field: NTE|P| NTE|P||
Empty component: ORC|1|725^
Empty Subcomponent: ORC|1|||||27&
Empty repeat: OBR|1||||||||027~
Trailing delimiters indicate that the following object exists and is empty, which is quite different from null; null is an explicit value indicated by a pair of double quotes -> "".
The BizTalk HL7 Accelerator by default does not allow trailing delimiters. There are three methods to allow trailing delimiters. NOTE: All schemas always allow trailing delimiters in the MSH segment.
1. Using party identifiers
MSH3.1 – Receive/inbound processing: using this value as a party allows you to configure the system to allow inbound trailing delimiters.
MSH5.1 – Send/outbound processing: using this value as a party allows you to configure the system to allow outbound trailing delimiters.
Generally, if you allow inbound trailing delimiters, then unless you are willing to programmatically remove all trailing delimiters you also need to configure the send side to allow trailing delimiters. Add the appropriate parties to the BizTalk Parties list from these two fields in your message stream, then open the BizTalk HL7 Configuration tool and, for each party, check the "Allow trailing delimiters (separators)" check box on the Validation tab.
Disadvantage – Each MSH3.1 and MSH5.1 value must be represented in the parties list and configured.
Advantage – Granular control over system behavior for each inbound/outbound system.
2. Using instance properties of a pipeline used in a send port or receive location
Open the BizTalk Server Administration console and locate the send port or receive location that contains the BTAHL72XReceivePipeline or BTAHL72XSendPipeline pipeline. Open the properties, click the [...] ellipsis button to the right of the selected pipeline, then in the property list locate the "TrailingDelimiterAllowed" property and set it to True.
Advantage – All messages through a particular Send Port or Receive Location will allow trailing delimiters.
Disadvantage – Must configure each Send Port or Receive Location. No granular control over which remote parties will send or receive messages with trailing delimiters.
3. Using a custom pipeline that uses a pre-configured BTA HL7 pipeline component
Use Visual Studio to construct a custom receive and send pipeline using the appropriate assembler or disassembler, set the component property "TrailingDelimitersAllowed" to True, compile and deploy the custom pipeline, and use the custom pipeline instead of the standard pipeline for all HL7 message processing.
Advantage – All messages using the custom pipeline will automatically allow trailing delimiters.
Disadvantage – Requires custom coding and development to create and deploy the custom pipeline. No granular control over which remote parties will send or receive messages with trailing delimiters.
What does a Trailing Delimiter do to the XML Schema?
Allowing trailing delimiters does not have the impact often expected in the actual XML schema. The schema reproduces the message with no data loss. Thus, the message when represented in XML must contain the extra fields in order to reproduce the outbound message, and a trailing delimiter results in an empty XML field. Trailing delimiters are not stripped from the inbound message.
Example:
<PID_21>44172</PID_21><PID_21>9257</PID_21> -> the original maximum number of repeats
<PID_21></PID_21> -> the empty repeated field
Allowing trailing delimiters does not remove the trailing delimiters from the message; it simply suppresses the check that would cause the message to fail parsing when trailing delimiters are present.
When can you not fix the problem by enabling trailing delimiters?
Each object in a message must have a location in the target BTAHL7 schema for its content to reside. If you have more objects in the message than are contained at that location, then enabling trailing delimiters will not resolve the problem. The schema must be extended to accommodate the empty message content. Examples:
Extra Field: NTE|P|||| – only 4 fields in the NTE segment; the 4th field exists, but is empty.
Extra component: PID|1|1523|47^^^^^^^ – only 5 components in a CX data type; the 5th component exists, but is empty.
Extra subcomponent: ORC|1|||||27&& – only 2 subcomponents in a CQ data type; the 3rd subcomponent is empty, but exists.
Extra Repeat: PID|1||||||||||||||||||||4419~5217~ – only 2 repeats allowed for the field "Mother's identifier"; the repeat is empty, but exists.
In each of these cases, you must locate the failing object and extend the type to allow an additional object of that type.
Field: Add a field of ST to the end of the segment, with a suitable name, in the segments_nnn.xsd.
Component: Create a new custom CX data type (i.e. CX_XtraComp) in the datatypes_nnn.xsd and add a new component to the custom CX data type. Update the field in the segments_nnn.xsd file to use the custom data type instead of the standard data type.
Subcomponent: Create a new custom CQ data type that accepts an additional TS value at the end of the data type. Create a custom TQ data type that uses the new custom CQ data type as the first subcomponent. Modify the ORC segment to use the new CQ data type at ORC.7 instead of the standard CQ data type.
Repeat: Modify the field definition for PID.21 in the segments_nnn.xsd to allow more repeats in the field.

    Read the article

  • Kernel, dpkg, sudo and apt-get corrupted

    - by TECH4JESUS
    Here are some errors that I am getting: 1) A proper configuration for Firestarter was not found. If you are running Firestarter from the directory you built it in, run make install-data-local to install a configuration, or simply make install to install the whole program. Firestarter will now close. root@p:/# firestarter ** (firestarter:5890): WARNING **: The connection is closed (firestarter:5890): GnomeUI-WARNING **: While connecting to session manager: None of the authentication protocols specified are supported. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. ^C 2) Also I cannot apt-get install sudo root@p:/# apt-get install sudo Reading package lists... Done Building dependency tree Reading state information... Done sudo is already the newest version. The following packages were automatically installed and are no longer required: gir1.2-rb-3.0 gir1.2-gstreamer-0.10 libntfs10 python-mako libdmapsharing-3.0-2 rhythmbox-data libx264-116 rhythmbox libiso9660-7 librhythmbox-core5 libvpx0 libmatroska4 gir1.2-gst-plugins-base-0.10 rhythmbox-mozilla rhythmbox-plugin-zeitgeist libattica0 libgpac0.4.5 python-markupsafe libmusicbrainz4c2a rhythmbox-plugin-cdrecorder rhythmbox-plugins libaudiofile0 Use 'apt-get autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 18 not upgraded. 9 not fully installed or removed. Need to get 0 B/76.3 MB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? 
Y /bin/sh: 1: /usr/sbin/dpkg-preconfigure: not found (Reading database ... 495741 files and directories currently installed.) Preparing to replace linux-image-3.2.0-24-generic 3.2.0-24.39 (using .../linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb) ... dpkg (subprocess): unable to execute old pre-removal script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.prerm): No such file or directory dpkg: warning: subprocess old pre-removal script returned error exit status 2 dpkg - trying script from the new package instead ... dpkg (subprocess): unable to execute new pre-removal script (/var/lib/dpkg/tmp.ci/prerm): No such file or directory dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb (--unpack): subprocess new pre-removal script returned error exit status 2 dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst): No such file or directory dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 2 Preparing to replace linux-image-3.2.0-25-generic 3.2.0-25.40 (using .../linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb) ... dpkg (subprocess): unable to execute old pre-removal script (/var/lib/dpkg/info/linux-image-3.2.0-25-generic.prerm): No such file or directory dpkg: warning: subprocess old pre-removal script returned error exit status 2 dpkg - trying script from the new package instead ... dpkg (subprocess): unable to execute new pre-removal script (/var/lib/dpkg/tmp.ci/prerm): No such file or directory dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb (--unpack): subprocess new pre-removal script returned error exit status 2 dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/linux-image-3.2.0-25-generic.postinst): No such file or directory dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 2 Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb /var/cache/apt/archives/linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Recovering from apt-get upgrade gone wrong due to a full disk

    - by Peter
    I was performing an apt-get upgrade on an Ubuntu 12.04.5 LTS box that hadn't been updated in a little while and the upgrade failed due to 'No space left on device'. After a little while I worked out space meant inodes and I have freed some up but unfortunately things have been left something askew. I have tried manually installing the old versions of packages mentioned using dpkg -i but that doesn't help. I have tried apt-get upgrade and apt-get -f install to no avail. Results are below. Any ideas how to fix things up? FIXED: Installing the earlier versions again manually via dpkg -i and then apt-get -f install has done the trick. Not sure why this didn't work the first time. The packages in question are listed below but they will presumably vary. libssl1.0.0_1.0.1-4ubuntu5.14_i386.deb linux-headers-3.2.0-64-generic-pae_3.2.0-64.97_i386.deb linux-image-generic-pae_3.2.0.64.76_i386.deb linux-headers-3.2.0-64_3.2.0-64.97_all.deb linux-headers-generic-pae_3.2.0.64.76_i386.deb root@unlinked:/tmp# apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. libssl-dev : Depends: libssl1.0.0 (= 1.0.1-4ubuntu5.14) but 1.0.1-4ubuntu5.17 is installed linux-generic-pae : Depends: linux-image-generic-pae (= 3.2.0.64.76) but 3.2.0.67.79 is installed Depends: linux-headers-generic-pae (= 3.2.0.64.76) but 3.2.0.67.79 is installed E: Unmet dependencies. Try using -f. root@unlinked:/tmp# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: linux-headers-3.2.0-43-generic-pae linux-headers-3.2.0-38-generic-pae linux-headers-3.2.0-41-generic-pae linux-headers-3.2.0-36-generic-pae linux-headers-3.2.0-63-generic-pae linux-headers-3.2.0-58-generic-pae linux-headers-3.2.0-60-generic-pae linux-headers-3.2.0-55-generic-pae linux-headers-3.2.0-40 linux-headers-3.2.0-41 linux-headers-3.2.0-36 linux-headers-3.2.0-37 linux-headers-3.2.0-43 linux-headers-3.2.0-38 linux-headers-3.2.0-44 linux-headers-3.2.0-39 linux-headers-3.2.0-45 linux-headers-3.2.0-51 linux-headers-3.2.0-52 linux-headers-3.2.0-53 linux-headers-3.2.0-48 linux-headers-3.2.0-54 linux-headers-3.2.0-60 linux-headers-3.2.0-55 linux-headers-3.2.0-61 linux-headers-3.2.0-56 linux-headers-3.2.0-57 linux-headers-3.2.0-63 linux-headers-3.2.0-58 linux-headers-3.2.0-59 linux-headers-3.2.0-52-generic-pae linux-headers-3.2.0-44-generic-pae linux-headers-3.2.0-39-generic-pae linux-headers-3.2.0-37-generic-pae linux-headers-3.2.0-59-generic-pae linux-headers-3.2.0-61-generic-pae linux-headers-3.2.0-56-generic-pae linux-headers-3.2.0-53-generic-pae linux-headers-3.2.0-48-generic-pae linux-headers-3.2.0-45-generic-pae linux-headers-3.2.0-40-generic-pae linux-headers-3.2.0-57-generic-pae linux-headers-3.2.0-54-generic-pae linux-headers-3.2.0-51-generic-pae Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libssl-dev linux-generic-pae The following packages will be upgraded: libssl-dev linux-generic-pae 2 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade. 2 not fully installed or removed. Need to get 0 B/1,427 kB of archives. After this operation, 1,024 B of additional disk space will be used. Do you want to continue [Y/n]? 
y dpkg: dependency problems prevent configuration of libssl-dev: libssl-dev depends on libssl1.0.0 (= 1.0.1-4ubuntu5.14); however: Version of libssl1.0.0 on system is 1.0.1-4ubuntu5.17. dpkg: error processing libssl-dev (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. dpkg: dependency problems prevent configuration of linux-generic-pae: linux-generic-pae depends on linux-image-generic-pae (= 3.2.0.64.76); however: Version of linux-image-generic-pae on system is 3.2.0.67.79. linux-generic-pae depends on linux-headers-generic-pae (= 3.2.0.64.76); however: Version of linux-headers-generic-pae on system is 3.2.0.67.79. dpkg: error processing linux-generic-pae (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. Errors were encountered while processing: libssl-dev linux-generic-pae E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Ivy and Snapshots (Nexus)

    - by Uberpuppy
    Hey folks, I'm using ant, ivy and nexus repo manager to build and store my artifacts. I managed to get everything working: dependency resolution and publishing. Until I hit a problem... (of course!). I was publishing to a 'release' repo in nexus, which is locked to 'disable redeploy' (even if you change the setting to 'allow redeploy' (really lame UI there imo). You can imagine how pissed off I was getting when my changes weren't updating through the repo before I realised that this was happening. Anyway, I now have to switch everything to use a 'Snapshot' repo in nexus. Problem is that this messes up my publish. I've tried a variety of things, including extensive googling, and haven't got anywhere whatsoever. The error I get is a bad PUT request, error code 400. Can someone who has got this working please give me a pointer on what I'm missing. Many thanks, Alastair fyi, here's my config: Note that I have removed any attempts at getting snapshots to work as I didn't know what was actually (potentially) useful and what was complete guff. This is therefore the working release-only setup. Also, please note that I've added the XXX-API ivy.xml for info only. I can't even get the xxx-common to publish (and that doesn't even have dependencies). Ant task: <target name="publish" depends="init-publish"> <property name="project.generated.ivy.file" value="${project.artifact.dir}/ivy.xml"/> <property name="project.pom.file" value="${project.artifact.dir}/${project.handle}.pom"/> <echo message="Artifact dir: ${project.artifact.dir}"/> <ivy:deliver deliverpattern="${project.generated.ivy.file}" organisation="${project.organisation}" module="${project.artifact}" status="integration" revision="${project.revision}" pubrevision="${project.revision}" /> <ivy:resolve /> <ivy:makepom ivyfile="${project.generated.ivy.file}" pomfile="${project.pom.file}"/> <ivy:publish resolver="${ivy.omnicache.publisher}" module="${project.artifact}" organisation="${project.organisation}" revision="${project.revision}" pubrevision="${project.revision}" pubdate="now" overwrite="true" publishivy="true" status="integration" artifactspattern="${project.artifact.dir}/[artifact]-[revision](-[classifier]).[ext]" /> </target> Couple of ivy files to give an idea of internal dependencies: XXX-Common project: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.myorg.xxx" module="xxx_common" status="integration" revision="1.0"> </info> <publications> <artifact name="xxx_common" type="jar" ext="jar"/> <artifact name="xxx_common" type="pom" ext="pom"/> </publications> <dependencies> </dependencies> </ivy-module> XXX-API project: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.myorg.xxx" module="xxx_api" status="integration" revision="1.0"> </info> <publications> <artifact name="xxx_api" type="jar" ext="jar"/> <artifact name="xxx_api" type="pom" ext="pom"/> </publications> <dependencies> <dependency org="com.myorg.xxx" name="xxx_common" rev="1.0" transitive="true" /> </dependencies> </ivy-module> IVY Settings.xml: <ivysettings> <properties file="${ivy.project.dir}/project.properties" /> <settings defaultResolver="chain" defaultConflictManager="all" /> <credentials host="${ivy.credentials.host}" realm="Sonatype Nexus Repository Manager" username="${ivy.credentials.username}" 
passwd="${ivy.credentials.passwd}" /> <caches> <cache name="ivy.cache" basedir="${ivy.cache.dir}" /> </caches> <resolvers> <ibiblio name="xxx_publisher" m2compatible="true" root="${ivy.xxx.publish.url}" /> <chain name="chain"> <url name="xxx"> <ivy pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/ivy-[revision].xml" /> <artifact pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> <ibiblio name="xxx" m2compatible="true" root="${ivy.xxx.repo.url}"/> <ibiblio name="public" m2compatible="true" root="${ivy.master.repo.url}" /> <url name="com.springsource.repository.bundles.release"> <ivy pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> <artifact pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> <url name="com.springsource.repository.bundles.external"> <ivy pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> <artifact pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> </chain> </resolvers> </ivysettings>

    Read the article

  • Maven struts2 modular archetype failing to generate!

    - by Xinus
    I am trying to generate struts 2 modular archetype using maven but always getting error as archetype not present here is a full output : C:\Users\Administrator>mvn archetype:generate [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'archetype'. [INFO] ------------------------------------------------------------------------ [INFO] Building Maven Default Project [INFO] task-segment: [archetype:generate] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] Preparing archetype:generate [INFO] No goals needed for project - skipping [INFO] Setting property: classpath.resource.loader.class => 'org.codehaus.plexus .velocity.ContextClassLoaderResourceLoader'. [INFO] Setting property: velocimacro.messages.on => 'false'. [INFO] Setting property: resource.loader => 'classpath'. [INFO] Setting property: resource.manager.logwhenfound => 'false'. [INFO] [archetype:generate {execution: default-cli}] [INFO] Generating project in Interactive mode [INFO] No archetype defined. Using maven-archetype-quickstart (org.apache.maven. archetypes:maven-archetype-quickstart:1.0) Choose archetype: 1: internal -> appfuse-basic-jsf (AppFuse archetype for creating a web applicati on with Hibernate, Spring and JSF) 2: internal -> appfuse-basic-spring (AppFuse archetype for creating a web applic ation with Hibernate, Spring and Spring MVC) 3: internal -> appfuse-basic-struts (AppFuse archetype for creating a web applic ation with Hibernate, Spring and Struts 2) 4: internal -> appfuse-basic-tapestry (AppFuse archetype for creating a web appl ication with Hibernate, Spring and Tapestry 4) 5: internal -> appfuse-core (AppFuse archetype for creating a jar application wi th Hibernate and Spring and XFire) 6: internal -> appfuse-modular-jsf (AppFuse archetype for creating a modular app lication with Hibernate, Spring and JSF) 7: internal -> appfuse-modular-spring (AppFuse archetype for creating a modular application with Hibernate, Spring and Spring MVC) 8: internal -> appfuse-modular-struts (AppFuse archetype for creating a modular application with Hibernate, Spring and Struts 2) 9: internal -> appfuse-modular-tapestry (AppFuse archetype for creating a modula r application with Hibernate, Spring and Tapestry 4) 10: internal -> maven-archetype-j2ee-simple (A simple J2EE Java application) 11: internal -> maven-archetype-marmalade-mojo (A Maven plugin development proje ct using marmalade) 12: internal -> maven-archetype-mojo (A Maven Java plugin development project) 13: internal -> maven-archetype-portlet (A simple portlet application) 14: internal -> maven-archetype-profiles () 15: internal -> maven-archetype-quickstart () 16: internal -> maven-archetype-site-simple (A simple site generation project) 17: internal -> maven-archetype-site (A more complex site project) 18: internal -> maven-archetype-webapp (A simple Java web application) 19: internal -> jini-service-archetype (Archetype for Jini service project creat ion) 20: internal -> softeu-archetype-seam (JSF+Facelets+Seam Archetype) 21: internal -> softeu-archetype-seam-simple (JSF+Facelets+Seam (no persistence) Archetype) 22: internal -> softeu-archetype-jsf (JSF+Facelets Archetype) 23: internal -> jpa-maven-archetype (JPA application) 24: internal -> spring-osgi-bundle-archetype (Spring-OSGi archetype) 25: internal -> confluence-plugin-archetype (Atlassian Confluence plugin archety pe) 26: internal -> jira-plugin-archetype (Atlassian JIRA plugin archetype) 27: internal -> 
maven-archetype-har (Hibernate Archive) 28: internal -> maven-archetype-sar (JBoss Service Archive) 29: internal -> wicket-archetype-quickstart (A simple Apache Wicket project) 30: internal -> scala-archetype-simple (A simple scala project) 31: internal -> lift-archetype-blank (A blank/empty liftweb project) 32: internal -> lift-archetype-basic (The basic (liftweb) project) 33: internal -> cocoon-22-archetype-block-plain ([http://cocoon.apache.org/2.2/m aven-plugins/]) 34: internal -> cocoon-22-archetype-block ([http://cocoon.apache.org/2.2/maven-p lugins/]) 35: internal -> cocoon-22-archetype-webapp ([http://cocoon.apache.org/2.2/maven- plugins/]) 36: internal -> myfaces-archetype-helloworld (A simple archetype using MyFaces) 37: internal -> myfaces-archetype-helloworld-facelets (A simple archetype using MyFaces and facelets) 38: internal -> myfaces-archetype-trinidad (A simple archetype using Myfaces and Trinidad) 39: internal -> myfaces-archetype-jsfcomponents (A simple archetype for create c ustom JSF components using MyFaces) 40: internal -> gmaven-archetype-basic (Groovy basic archetype) 41: internal -> gmaven-archetype-mojo (Groovy mojo archetype) Choose a number: (1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/2 4/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41) 15: : 8 [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] The defined artifact is not an archetype [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Sat Mar 27 08:22:38 IST 2010 [INFO] Final Memory: 8M/21M [INFO] ------------------------------------------------------------------------ C:\Users\Administrator> What can be the problem ?

    Read the article

  • Mixed Emotions: Humans React to Natural Language Computer

    - by Applications User Experience
    There was a big event in Silicon Valley on Tuesday, November 15. Watson, the natural language computer developed at IBM Watson Research Center in Yorktown Heights, New York, and its inventor and principal research investigator, David Ferrucci, were guests at the Computer History Museum in Mountain View, California for another round of the television game Jeopardy. You may have read about or watched on YouTube how Watson beat Ken Jennings and Brad Rutter, two top Jeopardy competitors, last February. This time, Watson swept the floor with two Silicon Valley high-achievers, one a venture capitalist with a background  in math, computer engineering, and physics, and the other a technology and finance writer well-versed in all aspects of culture and humanities. Watson is the product of the DeepQA research project, which attempts to create an artificially intelligent computing system through advances in natural language processing (NLP), among other technologies. NLP is a computing strategy that seeks to provide answers by processing large amounts of unstructured data contained in multiple large domains of human knowledge. There are several ways to perform NLP, but one way to start is by recognizing key words, then processing  contextual  cues associated with the keyword concepts so that you get many more “smart” (that is, human-like) deductions,  rather than a series of “dumb” matches.  Jeopardy questions often require more than key word matching to get the correct answer; typically several pieces of information put together, often from vastly different categories, to come up with a satisfactory word string solution that can be rephrased as a question.  Smarter than your average search engine, but is it as smart as a human? Watson was especially fast at descrambling mixed-up state capital names, and recalling and pairing movie titles where one started and the other ended in the same word (e.g., Billion Dollar Baby Boom, where both titles used the word Baby). David said they had basically removed the variable of how fast Watson hit the buzzer compared to human contestants, but frustration frequently appeared on the faces of the contestants beaten to the punch by Watson. David explained that top Jeopardy winners like Jennings achieved their success with a similar strategy, timing their buzz to the end of the reading of the clue,  and “running the board”, being first to respond on about 60% of the clues.  Similar results for Watson. It made sense that Watson would be good at the technical and scientific stuff, so I figured the venture capitalist was toast. But I thought for sure Watson would lose to the writer in categories such as pop culture, wines and foods, and other humanities. Surprisingly, it held its own. I was amazed it could recognize a word definition of a syllogism in the category of philosophy. So what was the audience reaction to all of this? We started out expecting our formidable human contestants to easily run some of their categories; however, they started off on the wrong foot with the state capitals which Watson could unscramble so efficiently. By the end of the first round, contestants and the audience were feeling a little bit, well, …. deflated. Watson was winning by about $13,000, and the humans had gone into negative dollars. The IBM host said he was going to “slow Watson down a bit,” and the humans came back with respectable scores in Double Jeopardy. 
This was partially thanks to a very sympathetic audience (and host, also a human) providing "group-think" on many questions, especially baseball's most valuable players, which, by the way, couldn't have been hard because even I knew them. Yes, that's right, the humans cheated. Since Watson could speak but not hear us (it didn't have speech recognition capability), it was probably unaware of this. In Final Jeopardy, the single question had to do with law. I was sure Watson would blow this one, but all contestants were able to answer a question about copyright law correctly. In a career devoted to making computers more helpful to people, I think I may have seen how a computer can do too much. I'm not sure I'd want to work side-by-side with a Watson doing my job. Certainly listening and empathy are important traits we humans still have over Watson. While there was great enthusiasm in the packed room of computer scientists and their friends for this standing-room-only show, I think it made several of us uneasy (especially the poor human contestants whose egos were soundly bashed in the first round). This computer system, by the way, only took four years to program. David Ferrucci mentioned several practical uses for Watson, including medical diagnoses and legal strategies. Are you "the expert" in your job? Imagine NLP computing on an Oracle database. This may be the user interface of the future, enabling users to better process big data. How do you think you'd like it? Postscript: There were three little boys sitting in front of me in the very first row. They looked, how shall I say it... unimpressed!

    Read the article

  • SPARC T4-4 Delivers World Record First Result on PeopleSoft Combined Benchmark

    - by Brian
Oracle's SPARC T4-4 servers running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record of 18,000 concurrent users while executing a PeopleSoft Payroll batch job of 500,000 employees in 43.32 minutes and maintaining online user response times at under 2 seconds. This world record is the first to run online and batch workloads concurrently. The result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 35% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices. This is the first three-tier mixed workload (online and batch) PeopleSoft benchmark that also processes the PeopleSoft Payroll batch workload.

Performance Landscape

PeopleSoft HR Self-Service and Payroll Benchmark
Systems: SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)
Users: 18,000
Average Search Response (sec): 0.944
Average Save Response (sec): 0.503
Batch Time (min): 43.32
Streams: 64

Configuration Summary

Application Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 512 GB memory; 5 x 300 GB SAS internal disks; 1 x 100 GB and 2 x 300 GB internal SSDs; 2 x 10 GbE HBAs; Oracle Solaris 11 11/11; PeopleTools 8.52; PeopleSoft HCM 9.1; Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031; Java Platform, Standard Edition Development Kit 6 Update 32

Database Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 256 GB memory; 3 x 300 GB SAS internal disks; Oracle Solaris 11 11/11; Oracle Database 11g Release 2

Web Tier Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 2 x 300 GB SAS internal disks; 1 x 100 GB internal SSD; Oracle Solaris 11 11/11; PeopleTools 8.52; Oracle WebLogic Server 10.3.4; Java Platform, Standard Edition Development Kit 6 Update 32

Storage Configuration: 1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz; 128 GB memory); 1 x Sun Storage F5100 Flash Array (80 flash modules); 1 x Sun Storage F5100 Flash Array (40 flash modules); 1 x Sun Fire X4275 as a COMSTAR head for redo logs (12 x 2 TB SAS disks with Niwot RAID controller)

Benchmark Description

This benchmark combines the PeopleSoft HCM 9.1 HR Self-Service online workload and the PeopleSoft Payroll batch workload, running against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HR Self-Service benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark whose database SQL statements are moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR Self-Service defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators.
The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions such as viewing a paycheck, promoting and hiring employees, updating an employee profile, and other typical HCM application transactions. All these transactions are well-defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average search/save response time across all the transactions.

The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents the large batch runs typical of an ERP environment during a mass update. The benchmark measures five application business process run times for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes.

The metrics are taken for each respective benchmark while both run simultaneously against the same database back-end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

Key Points and Best Practices

Two Oracle PeopleSoft domain sets with 200 application servers each on a SPARC T4-4 server were hosted in two separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration, and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (1 core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency (using the physical memory closest to the processors), offloading I/O interrupt handling to the default set's threads, freeing up CPU resources for application server threads, and balancing the application workload across 240 threads.

See Also
- Oracle PeopleSoft Benchmark White Papers (oracle.com)
- SPARC T4-2 Server (oracle.com, OTN)
- SPARC T4-4 Server (oracle.com, OTN)
- PeopleSoft Enterprise Human Capital Management (oracle.com, OTN)
- PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com, OTN)
- Oracle Solaris (oracle.com, OTN)
- Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

Disclosure Statement
Oracle's PeopleSoft HR and Payroll combined benchmark, www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 09/30/2012.

    Read the article

  • Error when running a GWTTestCase using maven gwt plugin

    - by adancu
Hi, I've created a test which extends GWTTestCase, but I'm getting this error:

mvn integration-test gwt:test
Running com.myproject.test.ui.GwtTestMyFirstTestCase
Translatable source found in...
[WARN] No source path entries; expect subsequent failures
[ERROR] Unable to find type 'java.lang.Object'
[ERROR] Hint: Check that your module inherits 'com.google.gwt.core.Core' either directly or indirectly (most often by inheriting module 'com.google.gwt.user.User')
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.1 sec <<< FAILURE!

GwtTestMyFirstTestCase.java is in /src/test/java, while the GWT module is located in src/main/java. I assume this shouldn't be a problem. I've done everything required according to http://mojo.codehaus.org/gwt-maven-plugin/user-guide/testing.html and of course my GWT module already inherits com.google.gwt.core.Core indirectly. Here is the pom:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.myproject</groupId>
  <artifactId>main</artifactId>
  <packaging>jar</packaging>
  <version>0.0.1-SNAPSHOT</version>
  <name>Main Module</name>
  <properties>
    <gwt.module>com.myproject.MainModule</gwt.module>
  </properties>
  <parent>
    <groupId>com.myproject</groupId>
    <artifactId>app</artifactId>
    <version>0.1.0-SNAPSHOT</version>
  </parent>
  <dependencies>
    <dependency>
      <groupId>com.myproject</groupId>
      <artifactId>app-commons</artifactId>
      <version>0.0.1-SNAPSHOT</version>
    </dependency>
    <dependency>
      <groupId>com.google.gwt</groupId>
      <artifactId>gwt-dev</artifactId>
      <version>${gwt.version}</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <configuration>
          <outputFile>../app/src/main/webapp/WEB-INF/main.tree</outputFile>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>gwt-maven-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>test</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <configuration>
          <classesDirectory>
            ${project.build.directory}/${project.build.finalName}/${gwt.module}
          </classesDirectory>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

Here is the test case, located in /src/test/java/com/myproject/test/ui:

public class GwtTestMyFirstTestCase extends GWTTestCase {
    @Override
    public String getModuleName() {
        return "com.myproject.MainModule";
    }

    public void testSomething() {
    }
}

Here is the GWT module I'm trying to test, located at src/main/java/com/myproject/MainModule.gwt.xml:

<module>
  <inherits name='com.myproject.Commons' />
  <source path="site" />
  <source path="com.myproject.test.ui" />
  <set-property name="gwt.suppressNonStaticFinalFieldWarnings" value="true" />
  <entry-point class='com.myproject.site.SiteModuleEntry' />
</module>

Can anyone give me a hint or two about what I'm doing wrong? Thanks in advance.

    Read the article

  • xml file save/read error (making a highscore system for XNA game)

    - by Eddy
I get an error after I write the player name to the file for the second or third time (An unhandled exception of type 'System.InvalidOperationException' occurred in System.Xml.dll. Additional information: There is an error in XML document (18, 17).). It stops in the highscores load method, at data = (HighScoreData)serializer.Deserialize(stream);. The problem is that somehow an additional ">" is added at the end of my .dat file. Could anyone tell me how to fix this?

The file before saving looks like this:

<?xml version="1.0"?>
<HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <PlayerName>
    <string>neil</string>
    <string>shawn</string>
    <string>mark</string>
    <string>cindy</string>
    <string>sam</string>
  </PlayerName>
  <Score>
    <int>200</int>
    <int>180</int>
    <int>150</int>
    <int>100</int>
    <int>50</int>
  </Score>
  <Count>5</Count>
</HighScoreData>

The file after saving looks like this:

<?xml version="1.0"?>
<HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <PlayerName>
    <string>Nick</string>
    <string>Nick</string>
    <string>neil</string>
    <string>shawn</string>
    <string>mark</string>
  </PlayerName>
  <Score>
    <int>210</int>
    <int>210</int>
    <int>200</int>
    <int>180</int>
    <int>150</int>
  </Score>
  <Count>5</Count>
</HighScoreData>>

The part of my code that does all of the save/load to XML is:

DECLARATIONS PART

[Serializable]
public struct HighScoreData
{
    public string[] PlayerName;
    public int[] Score;
    public int Count;

    public HighScoreData(int count)
    {
        PlayerName = new string[count];
        Score = new int[count];
        Count = count;
    }
}

IAsyncResult result = null;
bool inputName;
HighScoreData data;
int Score = 0;
public string NAME;
public string HighScoresFilename = "highscores.dat";

Game1 constructor

public Game1()
{
    graphics = new GraphicsDeviceManager(this);
    Content.RootDirectory = "Content";
    Width = graphics.PreferredBackBufferWidth = 960;
    Height = graphics.PreferredBackBufferHeight = 640;
    GamerServicesComponent GSC = new GamerServicesComponent(this);
    Components.Add(GSC);
}

Initialize method (end of it)

protected override void Initialize()
{
    // other game code
    base.Initialize();
    string fullpath = Path.Combine(HighScoresFilename);
    if (!File.Exists(fullpath))
    {
        // If the file doesn't exist, make a fake one...
        // Create the data to save
        data = new HighScoreData(5);
        data.PlayerName[0] = "neil";  data.Score[0] = 200;
        data.PlayerName[1] = "shawn"; data.Score[1] = 180;
        data.PlayerName[2] = "mark";  data.Score[2] = 150;
        data.PlayerName[3] = "cindy"; data.Score[3] = 100;
        data.PlayerName[4] = "sam";   data.Score[4] = 50;
        SaveHighScores(data, HighScoresFilename);
    }
}

All methods for loading, saving, and output:

public static void SaveHighScores(HighScoreData data, string filename)
{
    // Get the path of the save game
    string fullpath = Path.Combine("highscores.dat");

    // Open the file, creating it if necessary
    FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate);
    try
    {
        // Convert the object to XML data and put it in the stream
        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
        serializer.Serialize(stream, data);
    }
    finally
    {
        // Close the file
        stream.Close();
    }
}

/* Load highscores */
public static HighScoreData LoadHighScores(string filename)
{
    HighScoreData data;

    // Get the path of the save game
    string fullpath = Path.Combine("highscores.dat");

    // Open the file
    FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate, FileAccess.Read);
    try
    {
        // Read the data from the file
        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
        data = (HighScoreData)serializer.Deserialize(stream); // this is the line where the program gives an error
    }
    finally
    {
        // Close the file
        stream.Close();
    }
    return (data);
}

/* Save player highscore when game ends */
private void SaveHighScore()
{
    // Create the data to save
    HighScoreData data = LoadHighScores(HighScoresFilename);
    int scoreIndex = -1;
    for (int i = 0; i < data.Count; i++)
    {
        if (Score > data.Score[i])
        {
            scoreIndex = i;
            break;
        }
    }
    if (scoreIndex > -1)
    {
        // New high score found ... do swaps
        for (int i = data.Count - 1; i > scoreIndex; i--)
        {
            data.PlayerName[i] = data.PlayerName[i - 1];
            data.Score[i] = data.Score[i - 1];
        }
        data.PlayerName[scoreIndex] = NAME; // Retrieve user name here
        data.Score[scoreIndex] = Score;     // Retrieve score here
        SaveHighScores(data, HighScoresFilename);
    }
}

/* Iterate through data if highscore is called and make the string to be saved */
public string makeHighScoreString()
{
    // Create the data to save
    HighScoreData data2 = LoadHighScores(HighScoresFilename);

    // Create scoreBoardString
    string scoreBoardString = "Highscores:\n\n";
    for (int i = 0; i < 5; i++)
    {
        scoreBoardString = scoreBoardString + data2.PlayerName[i] + "-" + data2.Score[i] + "\n";
    }
    return scoreBoardString;
}

When I make this work I will start this code when I call game over (for now I start it when I press some buttons, so I can test it faster):

public void InputYourName()
{
    if (result == null && !Guide.IsVisible)
    {
        string title = "Name";
        string description = "Write your name in order to save your Score";
        string defaultText = "Nick";
        PlayerIndex playerIndex = new PlayerIndex();
        result = Guide.BeginShowKeyboardInput(playerIndex, title, description, defaultText, null, null);
        // NAME = result.ToString();
    }
    if (result != null && result.IsCompleted)
    {
        NAME = Guide.EndShowKeyboardInput(result);
        result = null;
        inputName = false;
        SaveHighScore();
    }
}

This is where I call the output to the screen (I'll call this in the highscores menu section when I am done with debugging):

spriteBatch.DrawString(Font1, "" + makeHighScoreString(), new Vector2(500, 200), Color.White); }
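Judging from the stray trailing ">" shown above, a likely culprit is FileMode.OpenOrCreate in SaveHighScores: it opens an existing file without truncating it, so whenever the newly serialized XML is shorter than the previous file contents, leftover bytes survive at the end and break the next Deserialize call. A minimal sketch of the usual fix, reusing the names from the question's code, is to save with FileMode.Create instead:

public static void SaveHighScores(HighScoreData data, string filename)
{
    // Uses System.IO and System.Xml.Serialization, as in the original code.
    string fullpath = Path.Combine("highscores.dat");

    // FileMode.Create truncates any existing file before writing, so no
    // stale bytes from an earlier, longer save can remain at the end.
    using (FileStream stream = File.Open(fullpath, FileMode.Create))
    {
        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
        serializer.Serialize(stream, data);
    }
}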

    Read the article

  • Common usecases and techniques when integrating a 3rd party application with Oracle Sales Cloud

    - by asantaga
Over the last year or so I've seen a lot of partners migrate and integrate their applications with Oracle Sales Cloud. Interestingly, I'd say 60% of the partners use the same set of design patterns over and over again. Most of the time I see that they want to embed their application into Oracle Sales Cloud, within a tab usually, perhaps click on a link to their application (passing some piece of data plus credentials), and then, within their application, update Sales Cloud again using web services. Here are some examples of the different use cases I've seen, and how partners are embedding their applications into Sales Cloud. NB: The following examples use the "Desktop" user interface rather than the newer "Simplified User Interface"; I'll update the sample application soon, but the integration patterns are precisely the same.

Use Case 1: Navigator "link out" to a third-party application
This is an example of where the developer has added a link to the global navigator which links out to the 3rd party application. Typically one doesn't pass any contextual data, with the exception of perhaps user credentials, or better still a JWT token.
Techniques used:
- Adding a link to a menu item
- Using a JWT token in Sales Cloud

Use Case 2: Application embedded within the Sales Cloud dashboard
Within the Oracle Sales Cloud application there is a tab called "Sales"; within this tab it's possible to embed a subtab containing an iFrame pointing to your application. To do this, the developer simply needs to edit the page in customization mode, add the tab, and then add the iFrame. Simples! The developer can pass credentials/a JWT token and some other pieces of data, but not object data (i.e., the current opportunity ID, etc.).
Techniques used:
- Adding a page to the dashboard
- Using a JWT token in Sales Cloud

Use Case 3: Embedding a tab and context-linking out from a Sales Cloud object to the 3rd party application
In this use case the developer embeds two components into Oracle Sales Cloud. The first is a subtab showing summary data to the user (a quote in our case), and the second is a hyperlink (although it could be a button) which, when clicked, navigates the user to the 3rd party application. In this case the developer almost always passes context-specific data (i.e., the opportunity ID) and a security token (a username/password combo or a JWT token). The third-party application usually takes the data, perhaps queries more data using the Sales Cloud SOAP/web service interface, and then displays the resulting mashup to the user for further processing. When the user has finished their work in the 3rd party application they normally navigate back to Oracle Sales Cloud using what's called a "deep link", i.e., one taking them back to the object [an opportunity in our case] they came from. This image visually shows a "happy path" a user may follow, combining linking out to an application, web service calls, and deep linking back to Sales Cloud.
Techniques used:
- Extending a Sales Cloud application with a custom button
- Using a JWT token in Sales Cloud
- Extending Oracle Sales Cloud [Opportunity] with a custom tab exposing external content
- Retrieving data from Oracle Sales Cloud using web services
- Coding some Groovy script to generate the URLs required (Doc 1571200.1 on My Oracle Support)
- Deep linking to specific Oracle Sales Cloud pages (Doc 1516151.1 on My Oracle Support)

Use Case 4: Server-side processing/synchronization
This use case focuses on server-side processing of data, in this case synchronizing data.
Here the 3rd party application runs on a "timer", e.g., cron or similar, and when triggered it queries data from Oracle Sales Cloud, then queries data from the 3rd party application, determines the deltas, and then inserts the data where required. Specifically, here we are calling Oracle Sales Cloud using SOAP/web services, while the 3rd party application is communicated with using its REST API; for Oracle Sales Cloud one would use standard JAX-WS web service calls, and for REST one would use the JAX-RS API and perhaps the Jackson API for managing JSON objects. This is a very common use case, and one which specifically lends itself to using the Oracle Java Cloud Service as the ideal application server on which to host the mediator between the two applications (a minimal sketch of this pattern appears after the resource list below).
Techniques used:
- Using a JWT token in Sales Cloud
- Integrating with the Oracle Java Cloud Service
- Retrieving data from Oracle Sales Cloud using web services

General Resources
The above is just a small set of the techniques and use cases in use today. There are plenty of other sources of documentation and resources available on the internet, but to get you started here are a few of my favourite places:
- Sales Cloud general documentation
- Sales Cloud Customize tab, useful for general customization of Sales Cloud
- Sales Cloud Integration tab, which focuses on the 3rd party integration techniques
- Official Oracle Fusion Developer Relations Blog
- Official Oracle Fusion Developer Relations YouTube Channel

Enjoy integrating!
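To make the use case 4 pattern concrete, here is a minimal sketch of the timer-driven delta sync. The article's stack is Java (JAX-WS for the SOAP leg, JAX-RS/Jackson for REST), but the pattern is language-neutral; this sketch uses C# with HttpClient, and both endpoint URLs and the ComputeDelta helper are hypothetical placeholders, not real Sales Cloud or partner APIs:

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch only: endpoint URLs and ComputeDelta are assumptions for illustration.
class SalesCloudSyncJob
{
    static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        while (true)                              // the "timer"; cron would also work
        {
            await SyncOnceAsync();
            await Task.Delay(TimeSpan.FromMinutes(15));
        }
    }

    static async Task SyncOnceAsync()
    {
        // 1. Query Sales Cloud (the article uses SOAP/JAX-WS for this leg).
        string cloudData = await Http.GetStringAsync(
            "https://salescloud.example.com/opportunities");

        // 2. Query the 3rd party application's REST API.
        string partnerData = await Http.GetStringAsync(
            "https://partner-app.example.com/api/deals");

        // 3. Determine the deltas and insert the missing records.
        foreach (string record in ComputeDelta(cloudData, partnerData))
        {
            HttpResponseMessage resp = await Http.PostAsync(
                "https://partner-app.example.com/api/deals",
                new StringContent(record, System.Text.Encoding.UTF8, "application/json"));
            resp.EnsureSuccessStatusCode();
        }
    }

    // Placeholder for the application-specific comparison logic.
    static string[] ComputeDelta(string cloud, string partner) => Array.Empty<string>();
}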

    Read the article

  • Imaging: Paper Paper Everywhere, but None Should be in Sight

    - by Kellsey Ruppel
Author: Vikrant Korde, Technical Architect, Aurionpro's Oracle Implementation Services team

My wedding photos are stored in several empty shoeboxes. Yes... I got married before digital photography was mainstream... which means I'm old. But my parents are really old. They have shoeboxes filled with vacation photos on slides (I doubt many of you have even seen a home slide projector... and I hope you never do!). Neither I nor my parents should have shoeboxes filled with any form of photographs whatsoever. They should obviously live in the digital world... with no physical versions in sight (other than a few framed on our walls). Businesses grapple with similar challenges. But instead of shoeboxes, they have file cabinets and warehouses jam-packed with paper invoices, legal documents, human resource files, material safety data sheets, incident reports, and the list goes on and on. In fact, regulatory and compliance rules govern many industries, requiring that this paperwork be available for any number of years. It's a real challenge... especially trying to find archived documents quickly, and many times with no backup. Which brings us to a set of technologies called Image Process Management (or simply Imaging or Image Processing) that are transforming these antiquated, paper-based processes. Oracle's WebCenter Content Imaging solution is a combination of their WebCenter suite, which offers a robust set of content and document management features, and their Business Process Management (BPM) suite, which helps to automate business processes through the definition of workflows and business rules. Overall, the solution provides an enterprise-class platform for end-to-end management of document images within transactional business processes. It's a solution that provides all of the capabilities needed, from document capture and recognition to imaging and workflow, to effectively transform your 'shoeboxes' of files into digitally managed assets that comply with strict industry regulations. The terminology can be quite overwhelming if you're new to the space, so we've provided a summary of the primary components of the solution below, along with a short description of the two paths that can be executed to load images of scanned documents into Oracle's WebCenter suite.
- WebCenter Imaging (WCI): the electronic document repository that provides security, annotations, and search capabilities, and is the primary user interface for managing work items in the imaging solution
- SOA & BPM Suites (workflow): provide business process management capabilities, including human tasks, workflow management, service integration, and all other standard SOA features.
It's interesting to note that there are a number of 'jumpstart' processes available to help accelerate the integration of business applications, such as the accounts payable invoice processing solution for E-Business Suite that facilitates the processing of large volumes of invoices.
- WebCenter Enterprise Capture (WEC): expedites the capture of paper documents to digital images, offering high-volume scanning and importing from email, and allows for flexible indexing options
- WebCenter Forms Recognition (WFR): automatically recognizes, categorizes, and extracts information from paper documents with greatly reduced human intervention
- WebCenter Content: the backend content server that provides versioning, security, and content storage

There are two paths that can be executed to send data from WebCenter Capture to WebCenter Imaging, both of which are described below.

1. Direct Flow - This is the simplest and quickest way to push an image scanned from WebCenter Enterprise Capture (WEC) to WebCenter Imaging (WCI), using the bare minimum of metadata. The WEC activities are:
- The paper document is scanned (or imported from email).
- The scanned image is indexed using a predefined indexing profile.
- The image is committed directly into the process flow.

2. WFR (WebCenter Forms Recognition) Flow - This is the more complex process, during which data is extracted from the image using a series of operations including Optical Character Recognition (OCR), Classification, Extraction, and Export. This process creates three files (TIFF, XML, and TXT), which are fed to the WCI Input Agent (the high-speed import/filing module). The WCI Input Agent directory is a standard ingestion method for adding content to WebCenter Imaging; the process for doing so is:
- WEC commits the batch using the respective commit profile.
- A TIFF file is created, passing data through the file name by including values separated by "_" (underscores); a small parsing sketch follows at the end of this article.
- WFR completes OCR, classification, extraction, and export, and pulls the data from the image. In addition to the TIFF file, which contains the document image, an XML file containing the extracted data and a TXT file containing the metadata that will be filled in WCI are also created.
- All three files are exported to WCI's Input Agent directory.
- Based on previously defined "input masks", the WCI Input Agent picks up the seeding file (often the TXT file).
- Finally, the TIFF file is pushed into UCM and a unique web-viewable URL is created. Based on the mapping data read from the TXT file, a new record is created in the WCI application.

Although these processes may seem complex, the Oracle components work seamlessly together to deliver a high-performing and scalable platform. The solution has been field-tested at some of the largest enterprises in the world and has transformed millions and millions of paper-based documents into more easily manageable digital assets. For more information on how an Imaging solution can help your business, please contact [email protected] (for U.S. West inquiries) or [email protected] (for U.S. East inquiries).

About the Author: Vikrant is a Technical Architect in Aurionpro's Oracle Implementation Services team, where he delivers WebCenter-based Content and Imaging solutions to Fortune 1000 clients. With more than twelve years of experience designing, developing, and implementing Java-based software solutions, Vikrant was one of the founding members of Aurionpro's WebCenter-based offshore delivery team. He can be reached at [email protected].
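As a small illustration of the underscore-delimited file-name convention described in the WFR flow above, here is a sketch of how a consumer might pull values back out of such a name. The field layout (batch ID, document type, invoice number) is purely hypothetical and not the actual WCI schema:

using System;
using System.IO;

class TiffNameParser
{
    static void Main()
    {
        // Hypothetical committed file: "<batchId>_<docType>_<invoiceNo>.TIF"
        string fileName = Path.GetFileNameWithoutExtension("B1042_INVOICE_78821.TIF");
        string[] parts = fileName.Split('_');

        // parts[0] = "B1042", parts[1] = "INVOICE", parts[2] = "78821"
        Console.WriteLine("BatchId={0}, DocType={1}, InvoiceNo={2}",
            parts[0], parts[1], parts[2]);
    }
}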

    Read the article

  • Trying to install Team viewer on Ubuntu 12.04

    - by Teknikk
I recently installed Ubuntu on my server and wanted to install TeamViewer so I could easily manage the virtual machines. However, I get errors when installing it from the Software Center, and more detailed errors in the terminal. Error output:

tek@tek-G53SW:~/Download$ sudo dpkg -i ipts teamviewer_linux_x64.deb
dpkg: error processing ipts (--install): cannot access archive: No such file or directory
(Reading database ... 142115 files and directories currently installed.)
Preparing to replace teamviewer7 7.0.9360 (using teamviewer_linux_x64.deb) ...
Unpacking replacement teamviewer7 ...
dpkg: dependency problems prevent configuration of teamviewer7:
 teamviewer7 depends on libc6-i386 (>= 2.7); however: Package libc6-i386 is not installed.
 teamviewer7 depends on lib32asound2; however: Package lib32asound2 is not installed.
 teamviewer7 depends on lib32z1; however: Package lib32z1 is not installed.
 teamviewer7 depends on ia32-libs; however: Package ia32-libs is not installed.
dpkg: error processing teamviewer7 (--install): dependency problems - leaving unconfigured
Errors were encountered while processing: ipts teamviewer7

I tried to install it manually, but with no luck; I heard some others have this problem. I am running Ubuntu 12.04 x64. Error when running sudo apt-get install libc6-i386 lib32asound2 lib32z1 ia32-libs:

tek@tek-G53SW:~/Download$ sudo apt-get install libc6-i386 lib32asound2 lib32z1 ia32-libs
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
 ia32-libs : Depends: ia32-libs-multiarch
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
tek@tek-G53SW:~/Download$

More errors:

tek@tek-G53SW:~/Download$ sudo apt-get -f install
[sudo] password for tek:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following packages will be REMOVED: teamviewer7
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 81.9 MB disk space will be freed.
Do you want to continue [Y/n]? y
(Reading database ... 142441 files and directories currently installed.)
Removing teamviewer7 ...
tek@tek-G53SW:~/Download$ sudo apt-get install libc6-i386 lib32asound2 lib32z1 ia32-libs
Reading package lists... Done
Building dependency tree
Reading state information... Done
lib32z1 is already the newest version.
libc6-i386 is already the newest version.
lib32asound2 is already the newest version.
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
 ia32-libs : Depends: ia32-libs-multiarch
E: Unable to correct problems, you have held broken packages.
tek@tek-G53SW:~/Download$ sudo apt-get install ia32-libs-multiarch
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
 ia32-libs-multiarch:i386 : Depends: gstreamer0.10-plugins-good:i386 but it is not going to be installed
  Depends: gtk2-engines:i386 but it is not going to be installed
  Depends: gtk2-engines-murrine:i386 but it is not going to be installed
  Depends: gtk2-engines-pixbuf:i386 but it is not going to be installed
  Depends: gtk2-engines-oxygen:i386 but it is not going to be installed
  Depends: ibus-gtk:i386 but it is not going to be installed
  Depends: libcanberra-gtk-module:i386 but it is not going to be installed
  Depends: libcups2:i386 but it is not going to be installed
  Depends: libcupsimage2:i386 but it is not going to be installed
  Depends: libfontconfig1:i386 but it is not going to be installed
  Depends: libgail-common:i386 but it is not going to be installed
  Depends: libgphoto2-2:i386 but it is not going to be installed
  Depends: libgtk2.0-0:i386 but it is not going to be installed
  Depends: libnss3:i386 but it is not going to be installed
  Depends: libqt4-opengl:i386 but it is not going to be installed
  Depends: libqt4-qt3support:i386 but it is not going to be installed
  Depends: libqt4-scripttools:i386 but it is not going to be installed
  Depends: libqt4-svg:i386 but it is not going to be installed
  Depends: libqtgui4:i386 but it is not going to be installed
  Depends: libqtwebkit4:i386 but it is not going to be installed
  Depends: librsvg2-common:i386 but it is not going to be installed
  Depends: libsane:i386 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
tek@tek-G53SW:~/Download$

    Read the article

  • Wifi hotspot disconnected after some time

    - by Rohit Bansal
I am trying to use my Ubuntu system as a Wi-Fi hotspot, but for some reason the hotspot gets disconnected on its own. Searching for a solution, I found this help: Why is my ethernet connection connecting and disconnecting repeatedly? Reading through the above article I used the following command: sudo killall dnsmasq. As a result I manage to keep the hotspot up for around 5-10 seconds before it disconnects, instead of it dropping immediately. Here's the system log (in case needed), tail -f /var/log/syslog:

Apr 1 23:31:42 NetworkManager[901]: <info> Starting dnsmasq...
Apr 1 23:31:42 NetworkManager[901]: <info> (wlan0): device state change: ip-config -> activated (reason 'none') [70 100 0]
Apr 1 23:31:42 dnsmasq[4159]: started, version 2.57 cachesize 150
Apr 1 23:31:42 dnsmasq[4159]: compile time options: IPv6 GNU-getopt DBus I18N DHCP TFTP IDN
Apr 1 23:31:42 dnsmasq-dhcp[4159]: DHCP, IP range 10.42.43.10 -- 10.42.43.100, lease time 1h
Apr 1 23:31:42 dnsmasq[4159]: reading /etc/resolv.conf
Apr 1 23:31:42 dnsmasq[4159]: using nameserver 220.226.6.104#53
Apr 1 23:31:42 dnsmasq[4159]: using nameserver 220.226.100.40#53
Apr 1 23:31:42 dnsmasq[4159]: cleared cache
Apr 1 23:31:42 NetworkManager[901]: <info> Activation (wlan0) successful, device activated.
Apr 1 23:31:42 NetworkManager[901]: <info> Activation (wlan0) Stage 5 of 5 (IP Configure Commit) complete.
Apr 1 23:31:42 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP4 Configure Get) complete.
Apr 1 23:31:42 dbus[885]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
Apr 1 23:31:42 dbus[885]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'

Connection established at this point... now disconnecting after 10 sec...

Apr 1 23:31:52 ntpdate[4194]: adjust time server 91.189.94.4 offset -0.011589 sec
Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): IP6 addrconf timed out or failed.
Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP6 Configure Timeout) scheduled...
Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP6 Configure Timeout) started...
Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 5 of 5 (IP Configure Commit) started...
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 53 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 53 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 67 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 67 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --jump REJECT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --out-interface wlan0 --jump REJECT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --out-interface wlan0 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --source 10.42.43.0/255.255.255.0 --in-interface wlan0 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --destination 10.42.43.0/255.255.255.0 --out-interface wlan0 --match state --state ESTABLISHED,RELATED --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table nat --insert POSTROUTING --source 10.42.43.0/255.255.255.0 ! --destination 10.42.43.0/255.255.255.0 --jump MASQUERADE
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 53 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 53 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 67 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 67 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --jump REJECT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --out-interface wlan0 --jump REJECT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --out-interface wlan0 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --source 10.42.43.0/255.255.255.0 --in-interface wlan0 --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --destination 10.42.43.0/255.255.255.0 --out-interface wlan0 --match state --state ESTABLISHED,RELATED --jump ACCEPT
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table nat --insert POSTROUTING --source 10.42.43.0/255.255.255.0 ! --destination 10.42.43.0/255.255.255.0 --jump MASQUERADE
Apr 1 23:32:01 NetworkManager[901]: <info> Starting dnsmasq...
Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 5 of 5 (IP Configure Commit) complete.
Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP6 Configure Timeout) complete.
Apr 1 23:32:01 NetworkManager[901]: <warn> dnsmasq died with signal 9
Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): device state change: activated -> failed (reason 'sharing-start-failed') [100 120 18]
Apr 1 23:32:01 dnsmasq[4235]: started, version 2.57 cachesize 150
Apr 1 23:32:01 dnsmasq[4235]: compile time options: IPv6 GNU-getopt DBus I18N DHCP TFTP IDN
Apr 1 23:32:01 dnsmasq-dhcp[4235]: DHCP, IP range 10.42.43.10 -- 10.42.43.100, lease time 1h
Apr 1 23:32:01 NetworkManager[901]: <warn> Activation (wlan0) failed for access point (Reppify Ubuntu)
Apr 1 23:32:01 dnsmasq[4235]: reading /etc/resolv.conf
Apr 1 23:32:01 dnsmasq[4235]: using nameserver 220.226.6.104#53
Apr 1 23:32:01 dnsmasq[4235]: using nameserver 220.226.100.40#53
Apr 1 23:32:01 dnsmasq[4235]: cleared cache
Apr 1 23:32:01 NetworkManager[901]: <warn> Activation (wlan0) failed.
Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): device state change: failed -> disconnected (reason 'none') [120 30 0]
Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): deactivating device (reason 'none') [0]
Apr 1 23:32:01 dbus[885]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
Apr 1 23:32:01 dbus[885]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Apr 1 23:32:01 NetworkManager[901]: <error> [1333303321.565351] [nm-device-wifi.c:1815] nm_device_wifi_set_mode(): (wlan0): error setting mode 2

    Read the article

  • Sending HTML to Gmail always lands in Spam

    - by cartaysm
I am having an issue with sending HTML emails to Gmail. I can send them to Yahoo, Hotmail, RR, AOL, etc. with no problem at all, but when I send them to Gmail I get kicked to spam. I have checked my IP against a lot of different lists to make sure it is not listed anywhere, which it is not:

spamhaus = is not listed in the DBL
abuse.net = is not listed in the SBL
abuse.net = is not listed in the PBL
abuse.net = is not listed in the XBL
spamcop = not listed in bl.spamcop.net

host 24.172.204.xxx
xxx.204.172.24.in-addr.arpa domain name pointer xxxevents.com.
host xxxevents.com
xxxevents.com has address 24.172.204.xxx
xxxevents.com mail is handled by 10 mail.xxxevents.com.

I am just trying to send a very VERY basic HTML message (listed below). I use an Ubuntu server, Swift Mailer, multipart/alternative (HTML and plain; a short sketch of the multipart structure follows at the end of this post), SPF = pass, and I am going to set up DKIM today to see if that fixes it (but I doubt it will). For now I will only post the message I sent that gets kicked to spam, and I can provide any details needed.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>Triathlon</title></head>
<body>
<table cellpadding="0" cellspacing="0">
<tr> <td>
<p>Thank you for attending our 4th annual Triathlon/Duathlon/5k at Hueston Woods State Park on August 12th. This event is held annually to raise research funding for Crohn's Disease, Ulcerative Colitis, and Muscular Dystrophy diseases.</p>
</td> </tr>
<tr> <td>
<p>As you know the results and pictures have been posted on our home page at since Sunday 8/13/2012. Now we also have updated our Facebook page with those photos and you can start tagging yourself or downloading the pictures now! <br /> our page and tag yourself at </p>
<p> test test </p>
<p>Race day events is professionally managed by Speedy-Feet</p>
</td> </tr>
</table>
</body>
</html>

Just plain text works great. I thought maybe the wording was messing me up, but that's not the case... I am almost done installing OpenDKIM so I will be able to rule that out very soon.

Edit: Okay, I installed OpenDKIM and I am getting passing results, so I sent the HTML posted above and it went through just fine. But now, when I start to add a few more lines, I am getting kicked back to spam again. Here is the updated HTML code:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>Triathlon</title></head>
<body>
<table cellpadding="0" cellspacing="0">
<tr> <td>
<center><a href='http://xxxevents.com' target="_blank"> <font face="Verdana, Arial, Helvetica, sans-serif" color="#666666" size="2"> <img src="http://xxxevents.com/marketemailimages/xxxlogo.png" alt="xxx It Events | Raising funds for Crohns, Colitis, and Muscular Dystrophy" border="0" /> </font></a></center>
</td>
<tr> <td>
<p>Thank you for attending our 4th annual Triathlon/Duathlon/5k at Hueston Woods State Park on August 12th. This event is held annually to raise research funding for Crohn's Disease, Ulcerative Colitis, and Muscular Dystrophy diseases.</p>
</td> </tr>
<tr> <td>
<p>As you know the results and pictures have been posted on our home page at since Sunday 8/13/2012. Now we also have updated our Facebook page with those photos and you can start tagging yourself or downloading the pictures now!
<br /> our page and tag yourself at </p>
<p> test test </p>
<p>Race day events is professionally managed by Speedy-Feet</p>
</td> </tr>
</table>
<table width="100%" border="0" cellspacing="0" cellpadding="0">
<tr> <td valign="top">
<div align="center" style="font-family:Verdana, Arial, Helvetica, sans-serif; font-size:10px;"><br />PO Box xxx Maineville, OH 45039<br />
<a href="mailto:[email protected]">[email protected]</a> | <a href='http://xxxevents.com' target="_blank">xxxevents.com</a><br />
<br /> </div>
</td> </tr>
</table>
</body>
</html>
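For reference, a multipart/alternative message like the poster describes pairs a plain-text view with the HTML view, so receivers that distrust HTML-only mail (a common spam signal) still see text. Here is a minimal sketch of that message structure in C# using System.Net.Mail; the addresses and SMTP host are placeholders, and this only illustrates the structure, not the poster's Swift Mailer setup:

using System.Net.Mail;
using System.Net.Mime;

class MultipartExample
{
    static void Main()
    {
        // Placeholder addresses and SMTP host; illustration only.
        var msg = new MailMessage("sender@example.com", "recipient@example.com")
        {
            Subject = "Triathlon results"
        };

        // Plain-text part: clients fall back to this, and filters generally
        // treat multipart/alternative better than HTML-only mail.
        msg.AlternateViews.Add(AlternateView.CreateAlternateViewFromString(
            "Thank you for attending our 4th annual Triathlon...",
            null, MediaTypeNames.Text.Plain));

        // HTML part last: mail clients prefer the final alternative view.
        msg.AlternateViews.Add(AlternateView.CreateAlternateViewFromString(
            "<html><body><p>Thank you for attending our 4th annual Triathlon...</p></body></html>",
            null, MediaTypeNames.Text.Html));

        new SmtpClient("mail.example.com").Send(msg);
    }
}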

    Read the article

  • How do I drag and drop between two listboxes in XUL?

    - by pc1oad1etter
I am trying to implement drag and drop between two listboxes. I have a few problems:

1) I am not detecting any drag events of any kind from the source listbox; I do not seem to be able to drag from it.
2) I can drag from my desktop to the target listbox, and I am able to detect 'dragenter', 'dragover' and 'dragexit' events. I am noticing that the event parameter is undefined in my 'dragenter' callback - is this a problem?
3) I cannot figure out how to complete the drag and drop operation. From https://developer.mozilla.org/En/DragDrop/Drag_Operations#Performing_...: "If the mouse was released over an element that is a valid drop target, that is, one that cancelled the last dragenter or dragover event, then the drop will be successful, and a drop event will fire at the target. Otherwise, the drag operation is cancelled and no drop event is fired." This seems to be referring to a 'drop' event, though there is not one listed at https://developer.mozilla.org/en/XUL/Events. I can't seem to detect the end of the drag in order to call one of the example doDrop() functions that I find on MDC.

My example, so far: http://pastebin.mozilla.org/713676

<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin/" type="text/css"?>
<window xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul" onload="initialize();">
  <vbox>
    <hbox>
      <vbox>
        <description>List1</description>
        <listbox id="source" draggable="true">
          <listitem label="1"/>
          <listitem label="3"/>
          <listitem label="4"/>
          <listitem label="5"/>
        </listbox>
      </vbox>
      <vbox>
        <description>List2</description>
        <listbox id="target" ondragenter="onDragEnter();">
          <listitem label="2"/>
        </listbox>
      </vbox>
    </hbox>
  </vbox>
  <script type="application/x-javascript">
  <![CDATA[
    function initialize(){
      jsdump('adding events');
      var origin = document.getElementById("source");
      origin.addEventListener("drag", onDrag, false);
      origin.addEventListener("dragdrop", onDragDrop, false);
      origin.addEventListener("dragend", onDragEnd, false);
      origin.addEventListener("dragstart", onDragStart, false);
      var target = document.getElementById("target");
      target.addEventListener("dragenter", onDragEnter, false);
      target.addEventListener("dragover", onDragOver, false);
      target.addEventListener("dragexit", onDragExit, false);
      target.addEventListener("drop", onDrop, false);
      target.addEventListener("drag", onDrag, false);
      target.addEventListener("dragdrop", onDragDrop, false);
    }
    function onDrag(){ jsdump('onDrag'); }
    function onDragDrop(){ jsdump('onDragDrop'); }
    function onDragStart(){ jsdump('onDragStart'); }
    function onDragEnd(){ jsdump('onDragEnd'); }
    function onDragEnter(event){
      //debugger;
      if(event){
        jsdump('onDragEnter event.preventDefault()');
        event.preventDefault();
      }else{
        jsdump("event undefined in onDragEnter");
      }
    }
    function onDragExit(){ jsdump('onDragExit'); }
    function onDragOver(event){
      //debugger;
      if(event){
        //jsdump('onDragOver event.preventDefault()');
        event.preventDefault();
      }else{
        jsdump("event undefined in onDragOver");
      }
    }
    function onDrop(event){
      jsdump('onDrop');
      var data = event.dataTransfer.getData("text/plain");
      event.target.textContent = data;
      event.preventDefault();
    }
    function jsdump(str) {
      Components.classes['@mozilla.org/consoleservice;1']
        .getService(Components.interfaces.nsIConsoleService)
        .logStringMessage(str);
    }
  ]]>
  </script>
</window>

    Read the article

  • Hibernate error: cannot resolve table

    - by Roman
I'm trying to get the example from the Hibernate reference to work. I've got a simple table Pupil with id, name and age fields. I've created a correct (as I think) Java class for it according to all the JavaBeans rules. I've created the configuration file, hibernate.cfg.xml, just like in the example from the reference. I've created a Hibernate mapping for the single class Pupil, and here is where the error occurs:

<hibernate-mapping>
  <class name="Pupil" table="pupils">
  ...
  </class>
</hibernate-mapping>

table="pupils" is red in my IDE and I see the message "cannot resolve table pupils". I've also found a very strange note in the reference which says that most users fail with the same problem when trying to run the example. IMHO, if the authors know there is such a problem, they should add some information about it. But how should I fix it? I don't want to deal with Ant here or with the other tools used in the example. I'm using MySQL 5.0, but I think it doesn't matter.

UPD: source code

Pupil.java - my persistent class

package domain;

public class Pupil {
    private Integer id;
    private String name;
    private Integer age;

    protected Pupil () {
    }

    public Pupil (String name, int age) {
        this.age = age;
        this.name = name;
    }

    public Integer getId () { return id; }
    public void setId (Integer id) { this.id = id; }
    public String getName () { return name; }
    public void setName (String name) { this.name = name; }
    public Integer getAge () { return age; }
    public void setAge (Integer age) { this.age = age; }

    public String toString () {
        return "Pupil [ name = " + name + ", age = " + age + " ]";
    }
}

Pupil.hbm.xml is the mapping for this class

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping package="domain" >
  <class name="Pupil" table="pupils">
    <id name="id">
      <generator class="native" />
    </id>
    <property name="name" not-null="true"/>
    <property name="age"/>
  </class>
</hibernate-mapping>

hibernate.cfg.xml - configuration for Hibernate

<hibernate-configuration>
  <session-factory>
    <!-- Database connection settings -->
    <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
    <property name="connection.url">jdbc:mysql://localhost/hbm_test</property>
    <property name="connection.username">root</property>
    <property name="connection.password">root</property>
    <property name="connection.pool_size">1</property>
    <property name="dialect">org.hibernate.dialect.MySQL5Dialect</property>
    <property name="current_session_context_class">thread</property>
    <property name="show_sql">true</property>
    <mapping resource="domain/Pupil.hbm.xml"/>
  </session-factory>
</hibernate-configuration>

HibernateUtils.java

package utils;

import org.hibernate.SessionFactory;
import org.hibernate.HibernateException;
import org.hibernate.cfg.Configuration;

public class HibernateUtils {
    private static final SessionFactory sessionFactory;

    static {
        try {
            sessionFactory = new Configuration ().configure ().buildSessionFactory ();
        } catch (HibernateException he) {
            System.err.println (he);
            throw new ExceptionInInitializerError (he);
        }
    }

    public static SessionFactory getSessionFactory () {
        return sessionFactory;
    }
}

Runner.java - class for testing Hibernate

import org.hibernate.Session;
import java.util.*;
import utils.HibernateUtils;
import domain.Pupil;

public class Runner {
    public static void main (String[] args) {
        Session s = HibernateUtils.getSessionFactory ().getCurrentSession ();
        s.beginTransaction ();
        List pups = s.createQuery ("from Pupil").list ();
        for (Object obj : pups) {
            System.out.println (obj);
        }
        s.getTransaction ().commit ();
        HibernateUtils.getSessionFactory ().close ();
    }
}

My libs: antlr-2.7.6.jar, asm.jar, asm-attrs.jar, cglib-2.1.3.jar, commons-collections-2.1.1.jar, commons-logging-1.0.4.jar, dom4j-1.6.1.jar, hibernate3.jar, jta.jar, log4j-1.2.11.jar, mysql-connector-java-5.1.7-bin.jar

Compile error: cannot resolve table pupils

    Read the article

< Previous Page | 192 193 194 195 196 197 198 199 200 201 202 203  | Next Page >