Search Results

Search found 359 results on 15 pages for 'matte gary'.

Page 10 of 15

  • C signal parent process from child

    - by Gary
    I'm trying to solve a problem I've got where a child process runs execvp() and needs to let the parent know if it returns. So, after execvp() returns (because there's been an error), how can I tell the parent that this particular event has happened so it can handle it? There's one method of writing a string of text through the pipe I'm using and then reading that from the parent, but it seems a bit sloppy. Is there a better way? Thanks!

    Read the article

  • How can I use a compound condition in a join in Linq?

    - by Gary McGill
    Let's say I have a Customer table which has a PrimaryContactId field and a SecondaryContactId field. Both of these are foreign keys that reference the Contact table. For any given customer, either one or two contacts may be stored. In other words, PrimaryContactId can never be NULL, but SecondaryContactId can be NULL. If I drop my Customer and Contact tables onto the "Linq to SQL Classes" design surface, the class builder will spot the two FK relationships from the Customer table to the Contact table, and so the generated Customer class will have a Contact field and a Contact1 field (which I can rename to PrimaryContact and SecondaryContact to avoid confusion). Now suppose that I want to get details of all the contacts for a given set of customers. If there was always exactly one contact then I could write something like: from customer in customers join contact in contacts on customer.PrimaryContactId equals contact.id select ... ...which would be translated into something like: SELECT ... FROM Customer INNER JOIN Contact ON Customer.FirstSalesPersonId = Contact.id But, because I want to join on both the contact fields, I want the SQL to look something like: SELECT ... FROM Customer INNER JOIN Contact ON Customer.FirstSalesPersonId = Contact.id OR Customer.SecondSalesPersonId = Contact.id How can I write a Linq expression to do that?
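
    One possible approach (a sketch only, not taken from the original thread): drop the join and use a second from clause, putting the OR into a where clause. For an inner join this is logically equivalent, although LINQ to SQL will typically emit it as a cross join with a WHERE predicate rather than the INNER JOIN ... OR form shown above.

    var customerContacts =
        from customer in customers
        from contact in contacts
        where customer.PrimaryContactId == contact.id
           || customer.SecondaryContactId == contact.id
        select new { customer, contact };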

    Read the article

  • Execute another program in a multi-threaded program

    - by Gary
    Hi, just wondering if it's possible to execute another program in a thread and send information to / get information from it. Essentially the same concept as with a child process and using pipes to communicate - however I don't want to use fork. I can't seem to find out whether it's possible to do this; any help would be appreciated. Thanks!

    Read the article

  • ASP.NET MVC / Linq-to-SQL classes: Can I get it to infer readable display names?

    - by Gary McGill
    If I have a table Orders with fields CustomerID, OrderID and OrderDate, then the "Linq-to-SQL classes" generated class will be called Orders, with members called CustomerID, OrderID and OrderDate. So far so good. However, if I then do Html.LabelFor(m => m.OrderDate) then the generated text will be "OrderDate" instead of "Order Date". I tried using Order_Date as the field name, but that didn't work. Is there any way to get it to infer a better display name? [I know that I can use data annotations to specify the display name explicitly, but I really don't want to do that for all my classes/members - I just want it to work by convention.]
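
    One convention-based approach (a sketch, assuming ASP.NET MVC 2 or later; the class name and registration below are made up for illustration) is to plug in a custom metadata provider that splits Pascal-cased property names when no explicit display name has been given:

    using System;
    using System.Collections.Generic;
    using System.Text.RegularExpressions;
    using System.Web.Mvc;

    public class ConventionDisplayNameProvider : DataAnnotationsModelMetadataProvider
    {
        protected override ModelMetadata CreateMetadata(
            IEnumerable<Attribute> attributes, Type containerType,
            Func<object> modelAccessor, Type modelType, string propertyName)
        {
            ModelMetadata metadata = base.CreateMetadata(
                attributes, containerType, modelAccessor, modelType, propertyName);

            // Only fill in a name when no [DisplayName]/[Display] attribute has set one.
            if (metadata.DisplayName == null && propertyName != null)
            {
                // "OrderDate" -> "Order Date"
                metadata.DisplayName = Regex.Replace(propertyName, "([a-z])([A-Z])", "$1 $2");
            }
            return metadata;
        }
    }

    // Registered once at startup, e.g. in Application_Start:
    // ModelMetadataProviders.Current = new ConventionDisplayNameProvider();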

    Read the article

  • Can you explicitly set a structure layout/alignment in C++ as you can in C#?

    - by Gary Willoughby
    In C# you have nice alignment attributes such as this: [StructLayout(LayoutKind.Explicit)] public struct Message { [FieldOffset(0)] public int a; [FieldOffset(4)] public short b; [FieldOffset(6)] public int c; [FieldOffset(22)] //Leave some empty space just for the heck of it. public DateTime dt; } This gives you fine control over how your structure is laid out in memory. Is there such a thing in standard C++?

    Read the article

  • How to modify posted form data within controller action before sending to view?

    - by Gary
    I want to render the same view after a successful action (rather than use RedirectToAction), but I need to modify the model data that is rendered to that view. The following is a contrived example that demonstrates two methods that do not work: [AcceptVerbs("POST")] public ActionResult EditProduct(int id, [Bind(Include="UnitPrice, ProductName")]Product product) { NORTHWNDEntities entities = new NORTHWNDEntities(); if (ModelState.IsValid) { var dbProduct = entities.ProductSet.First(p => p.ProductID == id); dbProduct.ProductName = product.ProductName; dbProduct.UnitPrice = product.UnitPrice; entities.SaveChanges(); } /* Neither of these work */ product.ProductName = "This has no effect"; ViewData["ProductName"] = "This has no effect either"; return View(product); } Does anyone know what the correct method is for accomplishing this?
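
    A likely explanation (a sketch of one common fix, not necessarily the accepted answer): after a POST, the HTML helpers read values from ModelState in preference to the model, so the posted values are redisplayed regardless of what is set on the model or in ViewData. Clearing the relevant ModelState entries before returning the view lets the modified values through:

    /* Remove the stale posted value (or call ModelState.Clear() to reset everything) */
    ModelState.Remove("ProductName");
    product.ProductName = "This value will now be rendered";
    return View(product);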

    Read the article

  • Cannot move CKEditor 4 selection to beginning

    - by Gary Hillerson
    I've been exploring numerous solutions on Stack Overflow and elsewhere to try to get CKEditor 4 to scroll to the top after I've programmatically added a number of pages to it, using CKEditor's insertHtml function. After adding my HTML, which looks fine, I want to position the cursor at the beginning. Here's one of a variety of things I've tried without success: function MoveCaretToStart(myEditor) { var range = new CKEDITOR.dom.range( myEditor.document ); range.selectNodeContents(myEditor.document.getBody()); range.moveToElementEditStart(range.root); // also tried range.collapse(true); range.select(); } ... MoveCaretToStart(CKEDITOR.instances['myEditor']); // which already has contents in it This doesn't throw any errors, but also doesn't move the cursor position (it remains at the end of the doc). I thought this one would be easy, but it sure hasn't been. Any help appreciated.

    Read the article

  • PHP - Open or copy a file when knowing only part of its name?

    - by Gary Willoughby
    I have a huge repository of files that are ordered by numbered folders. In each folder is a file which starts with a unique number followed by an unknown string of characters. Given the unique number, how can I open or copy this file? For example: I have been given the number '7656875' and nothing more. I need to interact with a file called '\server\7656800\7656875 foobar 2x4'. How can I achieve this using PHP?

    Read the article

  • C# - How to implement multiple comparers for an IComparable<T> class?

    - by Gary Willoughby
    I have a class that implements IComparable. public class MyClass : IComparable<MyClass> { public int CompareTo(MyClass c) { return this.whatever.CompareTo(c.whatever); } etc.. } I can then call the sort method of a generic list of my class List<MyClass> c = new List<MyClass>(); //Add stuff, etc. c.Sort(); and have the list sorted according to my comparer. How do I specify further comparers to sort my collection in different ways according to the other properties of MyClass, in order to let users sort my collection in a number of different ways?
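
    One common pattern (a sketch; the Name and Price members below are hypothetical stand-ins for "the other properties of MyClass"): keep CompareTo as the default order and supply extra IComparer<MyClass> implementations, or inline Comparison<T> delegates, for the alternative orders.

    using System;
    using System.Collections.Generic;

    public class MyClassNameComparer : IComparer<MyClass>
    {
        public int Compare(MyClass x, MyClass y)
        {
            return string.Compare(x.Name, y.Name);
        }
    }

    // Usage:
    // List<MyClass> c = new List<MyClass>();
    // c.Sort();                                      // default order via IComparable<MyClass>
    // c.Sort(new MyClassNameComparer());             // alternative order via IComparer<MyClass>
    // c.Sort((a, b) => a.Price.CompareTo(b.Price));  // or an ad-hoc Comparison<T>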

    Read the article

  • How am I overriding this C++ inherited member function without the virtual keyword being used?

    - by Gary Willoughby
    I have a small program to demonstrate simple inheritance. I am defining a Dog class which is derived from Mammal. Both classes share a simple member function called ToString(). How is Dog overriding the implementation in the Mammal class, when I'm not using the virtual keyword? (Do I even need to use the virtual keyword to override member functions?) mammal.h #ifndef MAMMAL_H_INCLUDED #define MAMMAL_H_INCLUDED #include <string> class Mammal { public: std::string ToString(); }; #endif // MAMMAL_H_INCLUDED mammal.cpp #include <string> #include "mammal.h" std::string Mammal::ToString() { return "I am a Mammal!"; } dog.h #ifndef DOG_H_INCLUDED #define DOG_H_INCLUDED #include <string> #include "mammal.h" class Dog : public Mammal { public: std::string ToString(); }; #endif // DOG_H_INCLUDED dog.cpp #include <string> #include "dog.h" std::string Dog::ToString() { return "I am a Dog!"; } main.cpp #include <iostream> #include "dog.h" using namespace std; int main() { Dog d; std::cout << d.ToString() << std::endl; return 0; } output I am a Dog! I'm using MinGW on Windows via Code::Blocks.

    Read the article

  • How can I remove gradients from Elementary theme?

    - by John
    I really don't like the gradients in the Elementary theme and I was wondering if there is a way to remove them from applications like Nautilus-Elementary, Postler, Dexter, etc. I've tried commenting out the Apps/[Application].rc in /usr/share/themes/elementary/gtk-2.0/gtkrc but it doesn't work. It still leaves the gradients in their place. I'm a big fan of the other controls in the theme: the scroll bar, the way it borders gedit and the buttons, and I'd like to keep these features, but I don't like the way it styles its windows. EDIT: The gradients I'm talking about are the ones at the top of the window. Some examples: Nautilus-Elementary: Postler: Rhythmbox: Transmission: I'd like to create a sort of matte look, similar to this, which was done using Orta: Nautilus-Elementary: Postler: Rhythmbox: Transmission: I'd like a flat color, preferably without the line separating the top part of the application from the bottom.

    Read the article

  • WUXGA revisited

    - by John Paul Cook
    I previously blogged about my search for a 17” 1920x1200 laptop. The only one I could find was a 17” MacBook Pro, which has been an excellent machine for running Windows and SQL Server. It is no longer made. Apple has a few refurbished ones available. Just be sure to get a matte display if you buy one. If you want WUXGA resolution or better in a laptop, your only off the shelf option is now the 15” MacBook Pro with the Retina display, which is 2880x1800. This exceeds the resolution of my 30” 2560x1600...(read more)

    Read the article

  • The best DPI value to let you work nicely [closed]

    - by user827992
    I'm probably about to buy a laptop; unfortunately they all have glare screens, even the "premium" devices, and the current offerings just differ on 2-3 variations of DPI and display resolution. Considering that I would like a 13" laptop, what would be the best resolution? I was looking for a 4:3, but these days they are all cheaply made, so I do not think that something expensive to produce like a 4:3 is on the market. On the 13" laptops I see that there are basically 3 kinds of displays available: 1366x768 (a 16:9 ratio), 1400x900 (a 16:10 ratio), 1600x900 (a 16:9 ratio). Honestly I'm asking for advice because I do not like these things, not even one of them, but this is the market today. I was looking for an old-style laptop with good plastics, a 4:3 or 5:4 ratio or something like that, and a true matte finish with a higher resolution compared to what you could find in old laptops. Since programming involves many text characters on the screen, is it a good idea to choose the one with the highest DPI/PPI?

    Read the article

  • How to install Ubuntu on an iMac

    - by leviathan
    I need help! I'm still new to this. I have an iMac (it's corrupted; when I try to open it, it shows a question mark alternating with a smiley face).
    Model Family: iMac G5
    Processor: 1.8GHz G5 (PowerPC 970fx)
    Manufacturer: Motorola
    # of CPUs: 1
    Size: 17"
    Finish: Matte
    Resolution: 1440x900
    Backlight: CCFL
    Base Memory: 256MB PC3200 DIMM
    Max Memory: 2GB
    Memory Slots: 2
    Brand: Apple
    Original OS: Mac OS X 10.3.5
    I think it's vintage, haha, but I want to fix it. First, I think the hard drive is broken (pretty much corrupted), but I tried to install Xubuntu on it because I don't have any other disc. I think Xubuntu is not compatible, though, because I can't seem to get past the partitioning part. Could someone please help?

    Read the article

  • Convert the following string into an array using explode()

    - by Deepali
    Hi, I have a string: NEW ALPINESTAR?S SMX PLUS WHITE MOTORCYCLE BOOTS! 44/9.5$349.95 Time Left 1h 2m NEW AGV BLADE FLAT MATTE WHITE LARGE/LG HELMET$75.53Time Left 1h 2m
    I want the result in an array like this:
    Productname                                               Price     time
    NEW ALPINESTAR?S SMX PLUS WHITE MOTORCYCLE BOOTS! 44/9.5  $349.95   Time Left 1h 2m

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing
    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys
    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay
    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
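
The scene generator itself is not listed in the article; the following is only a rough sketch (in C#, with made-up spacing and pin sizes) of the kind of nested-loop console application described above, emitting one sphere/cylinder pair per pin in the PolyRay syntax shown earlier.

using System;
using System.Globalization;
using System.IO;
using System.Text;

class PinBoardSceneWriter
{
    static void Main()
    {
        StringBuilder scene = new StringBuilder();
        const double spacing = 0.1;

        for (int row = 0; row < 100; row++)
        {
            for (int col = 0; col < 100; col++)
            {
                double x = col * spacing - 5.0;
                double y = row * spacing - 1.0;
                double z = 0.0; // later driven per-frame by the depth image

                // Pin head, then pin shaft, using the object syntax from the scene file above.
                scene.AppendFormat(CultureInfo.InvariantCulture,
                    "object {{ sphere <{0:0.000}, {1:0.000}, {2:0.000}>, 0.04 chrome }}\r\n",
                    x, y, z);
                scene.AppendFormat(CultureInfo.InvariantCulture,
                    "object {{ cylinder <{0:0.000}, {1:0.000}, {2:0.000}>, <{0:0.000}, {1:0.000}, {3:0.000}>, 0.02 chrome }}\r\n",
                    x, y, z, z - 1.5);
            }
        }

        File.WriteAllText("pinboard.pi", scene.ToString());
    }
}

A second, offset copy of the same loop would give the two intersecting matrices of pins mentioned above.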
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

Windows Kinect
The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

Creating a Depth Field Animation
The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

Rendering a Test Frame
In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles
The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure
The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:
· On-Premise
  o Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
  o Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
  o Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
  o Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
· Windows Azure
  o Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="CloudRayWorkerRole"> <Instances count="16" /> <ConfigurationSettings> <Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." /> </ConfigurationSettings> </Role> </ServiceConfiguration> About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames. Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="CloudRayWorkerRole"> <Instances count="256" /> <ConfigurationSettings> <Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." /> </ConfigurationSettings> </Role> </ServiceConfiguration> Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds. We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The first 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.   Completed Animation I’ve uploaded the completed animation to YouTube, a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles   The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes CPU to render, this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilized the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective. ·         Windows Azure can provide massive compute power, on demand, in a matter of minutes. ·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution. ·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget. ·         No charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.) Tips for using Windows Azure for Grid Computing Scenarios I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes, in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure. ·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. ·         Windows Azure accounts are typically limited to 20 cores. 
If you need to use more than this, a call to support and a credit card check will be required. · Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started. · Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle. · If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective. · Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles. · Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 4-10, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared via the OTN ArchBeat Facebook Page for the week of November 4-10, 2012. OAM/OVD JVM Tuning | @FusionSecExpert Vinay from the Oracle Fusion Middleware Architecture Group (the very prolific A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager. Exploring Lambda Expressions for the Java Language and the JVM | Java Magazine In the latest //Java/Architect column in Java Magazine, Ben Evans, Martijn Verburg, and Trisha Gee explain how, "although Lambda expressions might seem unfamiliar to begin with, they're quite easy to pick up, and mastering them will be vital for writing applications that can take full advantage of modern multicore CPUs." SOA Galore: New Books for Technical Eyes Only Shake up up your technical skills with this trio of new technical books from community members covering SOA and BPM. Oracle Solaris 11.1 update focuses on database integration, cloud | Mark Fontecchio TechTarget editor Mark Fontecchio reports on the recent Oracle Solaris 11.1 release, with comments from IDC's Al Gillen. Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." — Irving Wladawsky-Berger ADF Mobile Custom Javasciprt – iFrame Injection | John Brunswick The ADF Mobile Framework provides a range of out of the box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer. Architects Matter: Making sense of the people who make sense of enterprise IT Why do architects matter? Oracle Enterprise Architect Eric Stephens suggests that you ask yourself this question the next time you take the elevator to the Oracle offices on the 45th floor of the Willis Tower in Chicago, Illinois (or any other skyscraper, for that matter). If you had to take the stairs to get to those offices, who would you blame? "You get the picture," he says. "Architecture is essential for any necessarily complex structure, be it a building or an enterprise." (Read the article...) Converting SSL certificate generated by a 3rd party to an Oracle Wallet | Paulo Albuquerque Oracle Fusion Middleware A-Team member Paulo Albuquerque shares "a workaround to get your private key, certificate and CA trusted certificates chain into Oracle Wallet." How Data and BPM are married to get the right information to the right people at the right time | Leon Smiers "Business Process Management…supports a large group of stakeholders within an organization, all with different needs," says Oracle ACE Leon Smiers. "End-to-end processes typically run across departments, stakeholders and applications, and can often have a long life-span. So how do organizations provide all stakeholders with the information they need?" Leon provides answers in this post. 
Updated Business Activity Monitoring (BAM) Class | Gary Barg Oracle SOA Team blogger Gary Barg has news for those interested in a skills upgrade. This updated Oracle University course "explains how to use Oracle BAM to monitor enterprise business activities across an enterprise in real time. You can measure your key performance indicators (KPIs), determine whether you are meeting service-level agreements (SLAs), and take corrective action in real time." Thought for the Day "For every complex problem, there is a solution that is simple, neat, and wrong." — H. L. Mencken (September 12, 1880 – January 29, 1956) Source: SoftwareQuotes.com

    Read the article

  • Software firewall used in network

    - by user45019
    Hi, I have a medium-sized organization with between 300 and 500 users. I am looking for a software firewall for this type of organization. Which type of software do you guys prefer? I am not looking for a hardware firewall... Can you suggest some names of software firewalls for this kind of organization? Thanks, Gary

    Read the article

  • SQL SERVER – Check the Isolation Level with DBCC useroptions

    - by pinaldave
    In a recent consultancy project, the coordinator asked me – "can you tell me what the isolation level is for this database?" I have worked with different isolation levels but had never queried a database for it. I quickly looked up Books Online and found the DBCC command which can give me those details. You can run the DBCC UserOptions command on any database to get a few details about dateformat, datefirst, as well as the isolation level.
    DBCC useroptions
    Set Option                  Value
    --------------------------- --------------
    textsize                    2147483647
    language                    us_english
    dateformat                  mdy
    datefirst                   7
    lock_timeout                -1
    quoted_identifier           SET
    arithabort                  SET
    ansi_null_dflt_on           SET
    ansi_warnings               SET
    ansi_padding                SET
    ansi_nulls                  SET
    concat_null_yields_null     SET
    isolation level             read committed
    I thought this was a very handy script, which I had not used earlier. Thanks Gary for asking the right question. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology Tagged: Transaction Isolation

    Read the article

  • Great Surprise &ndash; MSDN Ultimate

    - by MarkPearl
    So, I attended the Microsoft Community Evening. The attendance was better than I was expecting for December and we had our first Programming Languages Meeting, where Gary did a great presentation on an intro to Ruby. The best surprise of the evening happened when I was about to leave: Robert MacLean asked me how we did our MS licensing – the fact being that we were about to reach the end of our Empower license with Microsoft and that I had no idea how we were going to afford upgrading it early next year. Well, out comes a Microsoft Visual Studio Ultimate with MSDN 12 month subscription. An absolutely awesome gift – thanks Robert! Best gift ever!

    Read the article

  • Visualising data a different way with Pivot collections

    - by Rob Farley
    Roger’s been doing a great job extending PivotViewer recently, and you can find the list of LobsterPot pivots at http://pivot.lobsterpot.com.au Many months back, the TED Talk that Gary Flake did about Pivot caught my imagination, and I did some research into it. At the time, most of what we did with Pivot was geared towards what we could do for clients, including making Pivot collections based on students at a school, and using it to browse PDF invoices by their various properties. We had actual commercial work based on Pivot collections back then, and it was all kinds of fun. Later, we made some collections for events that were happening, and even got featured in the TechEd Australia keynote. But I’m getting ahead of myself... let me explain the concept. A Pivot collection is an XML file (with .cxml extension) which lists Items, each linking to an image that’s stored in a Deep Zoom format (this means that it contains tiles like Bing Maps, so that the browser can request only the ones of interest according to the zoom level). This collection can be shown in a Silverlight application that uses the PivotViewer control, or in the Pivot Browser that’s available from getpivot.com. Filtering and sorting the items according to their facets (attributes, such as size, age, category, etc), the PivotViewer rearranges the way that these are shown in a very dynamic way. To quote Gary Flake, this lets us “see patterns which are otherwise hidden”. This browsing mechanism is very suited to a number of different methods, because it’s just that – browsing. It’s not searching, it’s more akin to window-shopping than doing an internet search. When we decided to put something together for the conferences such as TechEd Australia 2010 and the PASS Summit 2010, we did some screen-scraping to provide a different view of data that was already available online. Nick Hodge and Michael Kordahi from Microsoft liked the idea a lot, and after a bit of tweaking, we produced one that Michael used in the TechEd Australia keynote to show the variety of talks on offer. It’s interesting to see a pattern in this data: The Office track has the most sessions, but if the Interactive Sessions and Instructor-Led Labs are removed, it drops down to only the sixth most popular track, with Cloud Computing taking over. This is something which just isn’t obvious when you look an ordinary search tool. You get a much better feel for the data when moving around it like this. The more observant amongst you will have noticed some difference in the collection that Michael is demonstrating in the picture above with the screenshots I’ve shown. That’s because it’s been extended some more. At the SQLBits conference in the UK this year, I had some interesting discussions with the guys from Xpert360, particularly Phil Carter, who I’d met in 2009 at an earlier SQLBits conference. They had got around to producing a Pivot collection based on the SQLBits data, which we had been planning to do but ran out of time. We discussed some of ways that Pivot could be used, including the ways that my old friend Howard Dierking had extended it for the MSDN Magazine. I’m not suggesting I influenced Xpert360 at all, but they certainly inspired us with some of their posts on the matter So with LobsterPot guys David Gardiner and Roger Noble both having dabbled in Pivot collections (and Dave doing some for clients), I set Roger to work on extending it some more. 
He’s used various events and so on to be able to make an environment that allows us to do quick deployment of new collections, as well as showing the data in a grid view which behaves as if it were simply a third view of the data (the other two being the array of images and the ‘histogram’ view). I see PivotViewer as being a significant step in data visualisation – so much so that I feature it when I deliver talks on Spatial Data Visualisation methods. Any time when there is information that can be conveyed through an image, you have to ask yourself how best to show that image, and whether that image is the focal point. For Spatial data, the image is most often a map, and the map becomes the central mode for navigation. I show Pivot with postcode areas, since I can browse the postcodes based on their data, and many of the images are recognisable (to locals of South Australia). Naturally, the images could link through to the map itself, and so on, but generally people think of Spatial data in terms of navigating a map, which doesn’t always gel with the information you’re trying to extract. Roger’s even looking into ways to hook PivotViewer into the Bing Maps API, in a similar way to the Deep Earth project, displaying different levels of map detail according to how ‘zoomed in’ the images are. Some of the work that Dave did with one of the schools was generating the Deep Zoom tiles “on the fly”, based on images stored in a database, and Roger has produced a collection which uses images from flickr, that lets you move from one search term to another. Pulling the images down from flickr.com isn’t particularly ideal from a performance aspect, and flickr doesn’t store images in a small-enough format to really lend itself to this use, but you might agree that it’s an interesting concept which compares nicely to using Maps. I’m looking forward to future versions of the PivotViewer control, and hope they provide many more events that can be used, and even more hooks into it. Naturally, LobsterPot could help provide your business with a PivotViewer experience, but you can probably do a lot of it yourself too. There’s a thorough guide at getpivot.com, which is how we got into it. For some examples of what we’ve done, have a look at http://pivot.lobsterpot.com.au. I’d like to see PivotViewer really catch on a data visualisation tool.

    Read the article

  • Convert png sequence to x264 with ffmpeg

    - by Thucydides411
    I am trying to convert a series of pngs into an mp4 video. I am using ffmpeg, and want to encode the video with the x264 codec. Using the command ffmpeg -y -r 30 -b 1800k -i _tmp%04d.png -vcodec libx264 out.mp4 I get the following warning message Incompatible pixel format 'bgra' for codec 'libx264', auto-selecting format 'yuv420p' My understanding is that there is an alpha channel in the pngs, which the x264 encoder cannot handle. Is there a way to get around this problem? Is there, for example, a way to get the encoder to ignore the alpha channel (my pngs don't actually have any transparent elements)? I'm aware that I could batch convert the pngs beforehand to strip the alpha channel, but the sequence of images is produced by another program, and having to preprocess the images each time I make a video would be less than optimal. Edit: After stripping the alpha channel from each frame using the command convert in.png -background white -flatten +matte out.png ffmpeg gives the warning message Incompatible pixel format 'pal8' for codec 'libx264', auto-selecting format 'yuv420p' so still no dice.
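
    One thing that may be worth trying (just a suggestion, assuming a reasonably recent ffmpeg build): force the output pixel format explicitly instead of letting the encoder auto-select it, e.g.

    ffmpeg -y -r 30 -b 1800k -i _tmp%04d.png -vcodec libx264 -pix_fmt yuv420p out.mp4

    which sidesteps the incompatible-pixel-format warning without having to pre-process the PNGs.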

    Read the article

  • Editor's Notebook: Of Slobber Pots and Flux Capacitors

    - by user462779
    Just wrapping up the contents of the November 2012 issue of Profit... I found this snippet of an interview I did with Team Oracle mechanics Clyde Greene and Chad Colberg when I was in Gary, IN this summer working on a photo shoot about Team Oracle for the current issue. We were standing around in a hangar as the Team prepared for the Chicago Air and Water Show, chatting about the engineering and design of the Oracle Challenger III aerobatic plane. Pick up a copy of Profit's November 2012 and read what Team Oracle pilot Sean D. Tucker has to say about the Oracle Challenger III and get a closer look at the plane. I'll drop a link into this blog entry as soon as the story is available. Your editor, greasy and stooped after a red eye flight, talks with Sean D. Tucker about stunt flying.

    Read the article
