Search Results

Search found 68407 results on 2737 pages for 'text files'.


  • Is there a difference between plain text emails, and multipart emails with only plain text?

    - by Brian Armstrong
    I'm using Rails to send emails and I just want to send a plain text email (there is no corresponding HTML part). I've noticed that if I have one file named email.text.plain.erb, it actually generates a multipart email with a single part (the plain text part), like this:

        Content-Type: multipart/alternative; boundary=mimepart_4c04a2d34c4bb_690a4e56b0362

        --mimepart_4c04a2d34c4bb_690a4e56b0362
        Content-Type: text/plain; charset=utf-8
        Content-Transfer-Encoding: Quoted-printable
        Content-Disposition: inline

        text of the email here...
        --mimepart_4c04a2d34c4bb_690a4e56b0362--

    But if I take out the text.plain part and name it email.erb, ActionMailer generates a regular plain text email without the multipart wrapper, like this:

        Content-Type: text/plain; charset=utf-8

        text of the email here...

    Both work fine most of the time (so this is kind of nitpicky), but I guess my question is whether the second one is more correct. My goal here is just to make sure deliverability is as high as possible across a wide variety of devices and email clients. I've read that plain text emails can have slightly better deliverability rates than HTML, and was just curious whether throwing in this multipart wrapper (even if it only contains a plain text part) might throw off some of the dumber email clients. Thanks for your help!
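
    For comparison, the two message shapes being contrasted can be reproduced with Python's standard email library. This is only a sketch of the MIME structures themselves, not of ActionMailer, and the subject and body strings are made up:

        from email.message import EmailMessage
        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText

        # Shape 1: a bare text/plain message (analogous to the non-multipart case)
        plain = EmailMessage()
        plain["Subject"] = "Example"
        plain.set_content("text of the email here...")    # Content-Type: text/plain

        # Shape 2: multipart/alternative wrapping a single text/plain part
        multi = MIMEMultipart("alternative")
        multi["Subject"] = "Example"
        multi.attach(MIMEText("text of the email here...", "plain", "utf-8"))

        print(plain.get_content_type())    # text/plain
        print(multi.get_content_type())    # multipart/alternative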

    Read the article

  • Error inserting text in Papervision typography class

    - by safeDomain
    Hi everyone, I have run into a small problem. I want to make a 3D RTL (right-to-left) text animation with Papervision. This code produces the following error:

        [Fault] exception, information=TypeError: Error #1009: Cannot access a property or method of a null object reference.

    When I use English text instead, the error does not occur. My code:

        package
        {
            import flash.display.Sprite;
            import flash.events.Event;
            import org.papervision3d.scenes.Scene3D
            import org.papervision3d.view.Viewport3D
            import org.papervision3d.cameras.Camera3D
            import org.papervision3d.render.BasicRenderEngine
            import org.papervision3d.typography.Font3D
            import org.papervision3d.typography.fonts.HelveticaBold
            import org.papervision3d.typography.Text3D
            import org.papervision3d.materials.special.Letter3DMaterial
            import flash.text.engine.FontDescription
            import flash.text.engine.ElementFormat
            import flash.text.engine.TextElement
            import flash.text.engine.TextBlock
            import flash.text.engine.TextLine

            /**
             * ...
             * @author vahid
             */
            public class Main extends Sprite
            {
                private var fd:FontDescription
                private var ef:ElementFormat
                private var te:TextElement
                protected var st:String;
                private var scene:Scene3D
                private var view:Viewport3D
                private var camera:Camera3D
                private var render:BasicRenderEngine
                private var vpWidth:Number = stage.stageWidth;
                private var vpHeight:Number = stage.stageHeight;
                private var text3d:Text3D
                private var font3d:Font3D
                //private var font:HelveticaBold
                private var textMaterial:Letter3DMaterial
                private var text:String

                public function Main():void
                {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void
                {
                    removeEventListener(Event.ADDED_TO_STAGE, init);

                    // rtl block
                    fd = new FontDescription ()
                    ef = new ElementFormat (fd)
                    te = new TextElement ("?????? ?????? ???? ?????? ?? papervision", ef)
                    text = te.text

                    //3d block
                    scene = new Scene3D ()
                    view = new Viewport3D (vpWidth,vpHeight,true,true,false,false)
                    camera = new Camera3D ()
                    render = new BasicRenderEngine()
                    addChild (view)
                    this.addEventListener (Event.ENTER_FRAME , renderThis)
                    textMaterial = new Letter3DMaterial(0xFF0000,1)
                    font3d = new HelveticaBold()
                    text3d = new Text3D (text, font3d, textMaterial)
                    scene.addChild (text3d)
                }

                protected function renderThis(e:Event):void
                {
                    text3d.rotationY +=5
                    render.renderScene(scene,camera,view)
                }
            }
        }

    I am using FlashDevelop. Please help me, thanks!

    Read the article

  • After Adding "readonly" attribute on text box not able to remove it in one event

    - by Sreedhar K
    Steps to reproduce (use Internet Explorer):

    1. Check the unlimited check box.
    2. Click on the text box (it will remove the tick/check from the check box).
    3. Try to enter text in the text box; we cannot enter anything.
    4. Click again on the text box; now we are able to enter text in the text box.

    We tried:

    1. Setting the readOnly attribute to false, i.e. $('#myinput').attr('readOnly', false);
    2. Calling $('#myinput').click();

    Below is the HTML code:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Make input read only</title>
            <script type="text/javascript" src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.4.min.js"></script>
        </head>
        <body>
            <input id="myinput" type="text" />
            <input id="mycheck" type="checkbox" />
            <script type="text/javascript">
                /* on check box click */
                $('#mycheck').click(function () {
                    if ($(this).attr('checked')) {
                        $('#myinput').attr('readOnly', 'readOnly');
                    } else {
                        $('#myinput').removeAttr('readOnly');
                        /* also tried
                         * $('#myinput').attr('readOnly', false);
                         * $('#myinput').attr('readOnly', '');
                         */
                    }
                });

                /* on text box click */
                $('#myinput').click(function () {
                    $('#mycheck').removeAttr('checked');
                    $('#myinput').removeAttr('readOnly');
                    /* also tried
                     * $('#myinput').attr('readOnly', false);
                     * $('#myinput').attr('readOnly', '');
                     */
                });
            </script>
        </body>
        </html>

    Live copy

    Read the article

  • How to start matching and saving matches from an exact point in a text

    - by yuliya
    I have a text file and I am writing a parser for it using regular expressions and Perl. I can match what I need on two empty lines (using a regexp), because there is a pattern that lets me recognize blocks of text after two empty lines. The problem is that the whole text has an introduction part, and some text at the end, that I do not need. Here is the code, which splits the text into blocks whenever it finds two empty lines:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $file = 'first';
        open(my $fh, '<', $file);

        my $empty = 0;
        my $block_num = 1;
        open(OUT, '>', $block_num . '.txt');

        while (my $line = <$fh>) {
            chomp ($line);
            if ($line =~ /^\s*$/) {
                $empty++;
            } elsif ($empty == 2) {
                close(OUT);
                open(OUT, '>', ++$block_num . '.txt');
                $empty = 0;
            } else {
                $empty = 0;
            }
            print OUT "$line\n";
        }
        close(OUT);

    This is an example of the text I need (it's really small :)): this is file example

    I think I need to iterate over the text until it finds the words LOREM IPSUM, with a regexp like /^LOREM IPSUM/, because that is the point where the text I need starts (and save the text to a file once I reach that word). And I need to stop iterating over the text when the word INDEX is found, or save the remaining text in a separate file. How could I implement this? Should I use the next function to skip lines, or what?

    BR, Yuliya
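
    A rough sketch of the control flow, written in Python here only to keep the example short: the file name and the LOREM IPSUM / INDEX markers are taken from the question, everything else is illustrative:

        # Copy lines from the first "LOREM IPSUM" marker up to "INDEX",
        # starting a new numbered output file after every run of two empty lines.
        block_num = 1
        out = open(f"{block_num}.txt", "w")
        empty = 0
        inside = False

        with open("first") as fh:
            for raw in fh:
                line = raw.rstrip("\n")
                if not inside:
                    inside = line.startswith("LOREM IPSUM")  # skip the introduction
                    if not inside:
                        continue
                if line.startswith("INDEX"):                 # ignore everything after INDEX
                    break
                if line.strip() == "":
                    empty += 1
                elif empty == 2:
                    out.close()
                    block_num += 1
                    out = open(f"{block_num}.txt", "w")
                    empty = 0
                else:
                    empty = 0
                out.write(line + "\n")

        out.close()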

    Read the article

  • c# warn if text box is empty or contains a non-whole number

    - by Jamaul Smith
    In my specific case, I need the value in propertyPriceTextBox to be numeric only, and a whole number. A value also has to be entered, and I can just MessageBox.Show() a warning and that's all I'd need to do. This is what I have so far:

        private void computeButton_Click(object sender, EventArgs e)
        {
            decimal propertyPrice;
            if ((decimal.TryParse(propertyPriceTextBox.Text, out propertyPrice)))
                decimal.Parse(propertyPriceTextBox.Text);
            {
                if (residentialRadioButton.Checked == true)
                    commisionLabel.Text = (residentialCom * propertyPrice).ToString("c");
                if (commercialRadioButton.Checked == true)
                    commisionLabel.Text = (commercialCom * propertyPrice).ToString("c");
                if (hillsRadioButton.Checked == true)
                    countySalesTaxTextBox.Text = (hilssTax * propertyPrice).ToString("c");
                if (pascoRadioButton.Checked == true)
                    countySalesTaxTextBox.Text = (pascoTax * propertyPrice).ToString("c");
                if (polkRadioButton.Checked == true)
                    countySalesTaxTextBox.Text = (polkTax * propertyPrice).ToString("c");

                decimal result;
                result = (countySalesTaxTextBox.Text + stateSalesTaxTextBox.Text + propertyPriceTextBox.Text + comissionTextBox.Text).ToString("c");
            }
            else (.)
                MessageBox.Show("Property Price must be a whole number.");
        }
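
    The underlying check is independent of WinForms: parse the text as a whole number and branch on success. A minimal, language-neutral sketch in Python, with a made-up input string and commission rate purely for illustration:

        def parse_whole_number(text):
            """Return the value as an int, or None if the text is empty or not a whole number."""
            text = text.strip()
            if not text:
                return None
            try:
                return int(text)        # rejects "12.5", "abc", and ""
            except ValueError:
                return None

        user_input = "125000"           # stands in for propertyPriceTextBox.Text
        price = parse_whole_number(user_input)
        if price is None:
            print("Property Price must be a whole number.")   # the warning branch
        else:
            commission = price * 0.06   # 6% is an arbitrary example rate
            print(f"Commission: {commission:.2f}")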

    Read the article

  • Is C# development effectively inseparable from the IDE you use?

    - by Ghopper21
    I'm a Python programmer learning C# who is trying to stop worrying and just love C# for what it is, rather than constantly comparing it back to Python. I'm really getting caught up on one point: the lack of explicitness about where things are defined, as detailed in this Stack Overflow question. In short: in C#, using foo doesn't tell you what names from foo are being made available, which is analogous to from foo import * in Python, a form that is discouraged within Python coding culture for being implicit rather than the more explicit approach of from foo import bar. I was rather struck by the Stack Overflow answers to this point from C# programmers, which were that in practice this lack of explicitness doesn't really matter, because in your IDE (presumably Visual Studio) you can just hover over a name and be told by the system where the name is coming from. E.g.:

        Now, in theory I realise this means when you're looking with a text editor, you can't tell where the types come from in C#... but in practice, I don't find that to be a problem. How often are you actually looking at code and can't use Visual Studio?

    This is revelatory to me. Many Python programmers prefer a text editor approach to coding, using something like Sublime Text 2 or vim, where it's all about the code, plus command line tools and direct access and manipulation of folders and files. The idea of being dependent on an IDE to understand code at such a basic level seems anathema. It seems C# culture is radically different on this point. And I wonder if I just need to accept and embrace that as part of my learning of C#. Which leads me to my question here: is C# development effectively inseparable from the IDE you use?

    Read the article

  • Git ignore deleted files

    - by Petah
    OK, here's my situation. I have a website project that has more than 50,000 files (unimportant to development) in some directories:

        /website.com/files/1.txt
        /website.com/files/2.txt
        /website.com/files/3.txt
        /website.com/files/etc.txt

    The stuff in /files is already in the repo. I want to delete all the files in /files on my local copy, but I want git to ignore that so it doesn't delete them when I do a pull on the web server. Any ideas?

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.

    The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
    For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of each pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated.

    The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below.

    The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below.

    Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below.

    The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below.

    The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

        Number of Servers    Cost
        1                    $500
        16                   $8,000
        256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

    The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

        Number of Worker Roles    Render Time    Cost
        1                         256 hours      $30.72
        16                        16 hours       $30.72
        256                       1 hour         $30.72

    Using worker roles in Windows Azure results in the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    On-Premise
    - Windows Kinect: used in combination with the Kinect Explorer to create a stream of depth images.
    - Animation Creator: this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
    - Process Monitor: this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
    - Image Downloader: this application polls the image queue and downloads the rendered animation files once they are complete.

    Windows Azure
    - Azure Storage: queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below.

    The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The first 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.   Completed Animation I’ve uploaded the completed animation to YouTube, a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles   The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes CPU to render, this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilized the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective. ·         Windows Azure can provide massive compute power, on demand, in a matter of minutes. ·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution. ·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget. ·         No charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.) Tips for using Windows Azure for Grid Computing Scenarios I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes, in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure. ·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. ·         Windows Azure accounts are typically limited to 20 cores. 
If you need to use more than this, a call to support and a credit card check will be required. ·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock our in which the instance is deployed. Schedule the workload to start just after the clock hour has started. ·         Monitor the utilization of the resources you are provisioning, ensure that you are not paying for worker roles that are idle. ·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective. ·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles. ·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored, if the scaling algorithms are not optimal it could get very expensive!
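
    The queue-centric pattern described above (each worker polls a job queue, processes one item, stores the result, then deletes the message) is easy to prototype outside Azure. The sketch below mimics the idea with Python's standard library only; the frame names, the 16-worker count, and the fake "render" step are placeholders, not the article's actual CloudRay code:

        import queue
        import threading

        jobs = queue.Queue()
        results = queue.Queue()

        for frame in range(1, 2001):                      # 2,000 scene files, one message per frame
            jobs.put(f"scene_{frame:04d}.pi")

        def worker(worker_id: int) -> None:
            while True:
                try:
                    scene = jobs.get_nowait()             # poll the job queue
                except queue.Empty:
                    return                                # no work left for this worker
                rendered = scene.replace(".pi", ".jpg")   # placeholder for the render + convert step
                results.put((worker_id, rendered))        # "upload" the finished frame
                jobs.task_done()                          # "delete" the job message

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(16)]
        for t in threads:
            t.start()
        jobs.join()
        print(f"rendered {results.qsize()} frames with {len(threads)} workers")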

    Read the article

  • Run Sinatra via a Rake Task to Generate Static Files?

    - by viatropos
    I'm not sure I can put this correctly, but I'll give it a shot. I want to use Sinatra to generate static HTML files once I am ready to deploy an application, so the resulting final website would be pure static HTML. During development, however, I want everything to be dynamic so I can use Haml and straight Ruby code to make things fast/DRY/clear. I don't want to use Jekyll or some of the other static site generators out there because they don't have as much power as Sinatra. So all I basically need to be able to do is run a rake task such as: rake sinatra:generate_static_files. That would run the render commands for Haml and everything else, and the result would be written to files. My question is, how do I do that with Sinatra in a Rake task? Can I do it in a Rake task? The problem is, I don't know how to include the Sinatra::Application in the rake task... The only other way I could think of doing it is using net/http to access a URL that does all of that, but that seems like overkill. Any ideas on how to solve this?

    Read the article

  • SQL 05 full-text query fails "Specified module could not be found."

    - by Dan Bailiff
    I'm running SQL 2005 on Windows XP. I have a database table that has full text searching enabled. I was able to build and even re-build the index. However, when I try to query it like this: Select * from fulltext_english WHERE CONTAINS(page_data, 'causes') I get this error: Msg 7619, Level 16, State 1, Line 1 The execution of a full-text query failed. "The specified module could not be found." Did I miss something on the install? Is this a dll issue? I've googled and binged for days and can't find anything similar to this message. Thanks!

    Read the article

  • Bulk Rename Tool is a Lightweight but Powerful File Renaming Tool

    - by Jason Fitzpatrick
    There’s no need to settle for overly simplistic file renaming tools as long as Bulk Rename Tool is around. It’s lightweight, insanely customizable, portable, and sure to make short work of any renaming task you throw at it. Bulk Rename Tool is a great portable application (available as an installed version if you crave context menu integration) that blasts through file renaming tasks. The main panel is intimidatingly packed with toggles and variables you can alter; this isn’t a one-click solution by any means. That said, once you get comfortable using the interface it’s lightning fast and extremely flexible. One tip that will save you an enormous amount of frustration when you get started: make sure to highlight the files you want to change in the file preview window (located in the upper right corner), or else you won’t see the preview and won’t know if the changes you’re making in the control panel are yielding the file names you desire. Hit up the link below to read more and grab a copy; Bulk Rename Tool is free, Windows only.

    Bulk Rename Tool

    Read the article

  • How do I create an encrypted file system inside a file?

    - by darent
    Recently I found this interesting tutorial: http://flossstuff.wordpress.com/2011/08/07/using-a-file-as-a-storage-device/

    It explains how to create an empty file, format it as ext4, and mount it as a device. I'd like to know if it can be created as an encrypted ext4 file system. I've tried using palimpsest (the disk utility found in the System menu) to format the already created file system, but it doesn't work because it detects that the file system is in use. If I try to unmount the file system, that doesn't work either, because it isn't detected as a device (since it's not a real device like a hard drive or a USB drive). So my question is: is there an option to create the file system encrypted from the beginning? I've used these commands:

    Create an empty file, 200 MB in size:

        dd if=/dev/zero of=/path/to/file bs=1M count=200

    Make it ext4:

        mkfs -t ext4 file

    Mount it in a folder inside my home:

        sudo mount -o loop file /path/to/mount_point

    Is there any way to have the mkfs command create the ext4 file system encrypted, so that it asks for a decryption password? I'm planning to use this as a way to encrypt files inside Dropbox. Thanks for your time.
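
    One common way to get an encrypted file system inside a plain file is to put LUKS (via cryptsetup) between the loop file and mkfs. This is only a sketch of that approach, not something the tutorial above covers; the paths and the mapping name are made up, cryptsetup must be installed, and the commands are shown here wrapped in Python purely for annotation:

        import subprocess

        container = "/path/to/file"        # the 200 MB file created with dd
        mapping = "secretfs"               # arbitrary device-mapper name
        mount_point = "/path/to/mount_point"

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # Format the container as a LUKS volume (prompts for a passphrase).
        run(["sudo", "cryptsetup", "luksFormat", container])
        # Open it; the decrypted view appears at /dev/mapper/<mapping>.
        run(["sudo", "cryptsetup", "luksOpen", container, mapping])
        # Put ext4 inside the decrypted device, then mount it.
        run(["sudo", "mkfs", "-t", "ext4", f"/dev/mapper/{mapping}"])
        run(["sudo", "mount", f"/dev/mapper/{mapping}", mount_point])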

    Read the article

  • Changing the Settings item description text color in Android?

    - by Cori
    I use a Samsung Vibrant, and one of the annoying bits that Samsung tossed on their TouchWiz skin was changing the color of the Settings menu item description text from white (AOSP) to light blue. This looks okay as long as the skin is bluey-themed, but most ROMs seem to be leaning toward Gingerbread clones... doesn't look good with the green. How can I change that settings description font color back to white, or even the orange it is in actual Android 2.3? Which xml file is the color property located in? It seems to also spread to all apps you install, too... the blue text.

    Read the article

  • what is a fast way to output h5py dataset to text?

    - by user362761
    I am using the h5py Python package to read files in HDF5 format (e.g. somefile.h5). I would like to write the contents of a dataset to a text file. For example, I would like to create a text file with the following contents: 1,20,31,75,142,324,78,12,3,90,8,21,1

    I am able to access the dataset in Python using this code:

        import h5py
        f = h5py.File('/Users/Me/Desktop/thefile.h5', 'r')
        group = f['/level1/level2/level3']
        dset = group['dsetname']

    My naive approach is too slow, because my dataset has over 20000 entries:

        # write all values to file
        for index in range(len(dset)):
            # do not add comma after last value
            if index == len(dset)-1:
                txtfile.write(repr(dset[index]))
            else:
                txtfile.write(repr(dset[index])+',')
        txtfile.close()
        return None

    Is there a faster way to write this to a file? Perhaps I could convert the dataset into a NumPy array or even a Python list, and then use some file-writing tool? (I could experiment with concatenating the values into a larger string before writing to file, but I'm hoping there's something entirely more elegant.)
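
    One plausible speed-up (untested against the question's actual file; the path and dataset names are the question's own examples) is to read the whole dataset into memory in one call and let NumPy do the formatting instead of indexing element by element:

        import h5py
        import numpy as np

        with h5py.File('/Users/Me/Desktop/thefile.h5', 'r') as f:
            dset = f['/level1/level2/level3/dsetname']
            data = dset[()]            # one bulk read into a NumPy array

        # Write the whole array as a single comma-separated line.
        with open('output.txt', 'w') as txtfile:
            txtfile.write(','.join(str(v) for v in data))

        # Alternatively, numpy.savetxt can write it as one delimited row.
        np.savetxt('output2.txt', data[np.newaxis], delimiter=',', fmt='%s')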

    Read the article

  • How to stop jQuery from returning tabs and spaces from formatted code on .html(), .val(), .text(), etc.

    - by brandonjp
    I've got an HTML table:

        <table><tr>
            <td>M1</td>
            <td>M2</td>
            <td>M3</td>
            <td>M4</td>
        </tr></table>

    and a simple jQuery script:

        $('td').click(function(){
            alert( $(this).html() );
        });

    That works just fine... but in the real world, I've got several hundred table cells and the code is formatted improperly in places because of several people editing the page. So if the HTML is:

        <td>
            M1
        </td>

    then the alert() gives me all the tabs, returns, and spaces. What can I do to get ONLY the text without the tabs and spaces? I've tried .html(), .val(), and .text() to no avail. Thanks!

    Read the article

  • Is there a Windows utility that will let me do multiple programmatic find/replaces on text that I cut and paste?

    - by billmaya
    I've inherited some C# code that contains about a thousand lines of source that I need to modify, transforming it from this:

        newDataRow["to_dir"] = comboBox108.Text;

    to this:

        assetAttributes.Add("to_dir", comboBox108.Text);

    The lines occur in various places throughout the application in groups of 40 or 50. Modifying each line by hand in Visual Studio 2008 can be done, but it's labor intensive and prone to errors. Is there a Windows utility out there that will let me cut and paste groups of code into it and then run some sort of regex to transform the individual lines one by one? I'd also be willing to use some sort of VS 2008 add-in that performed the same set of regex operations against a selection of code. Thanks in advance.
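
    Whatever tool ends up running it, the substitution itself is a single regex. A rough sketch of the rewrite in Python; the capture groups assume every line follows the newDataRow["key"] = value; shape shown above:

        import re

        source = 'newDataRow["to_dir"] = comboBox108.Text;'

        # Group 1: the quoted key inside newDataRow[...]; group 2: the right-hand side expression.
        pattern = r'newDataRow\[("[^"]+")\]\s*=\s*(.+);'
        replacement = r'assetAttributes.Add(\1, \2);'

        print(re.sub(pattern, replacement, source))
        # -> assetAttributes.Add("to_dir", comboBox108.Text);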

    Read the article

  • Why does my system slow down or freeze when there is heavy disk activity?

    - by user72270
    I'm a first-time user of Ubuntu 12.04 with a WUBI installation. My notebook: Dell Vostro 3450: i5 2410M, 3 GB RAM, Intel HD3000, AMD 6630M hybrid graphics. Surfing and playing games work flawlessly; however, I'm having huge problems when installing applications and generally when copying and moving files. When doing so, the system is significantly slower and freezes quite often (Firefox goes bluish, sometimes even black and white). I would say that Ubuntu allocates too many resources to file transfers and installs, but even those tasks are very slow. Here is a very specific example: today I tried to move a 6 GB file from my Windows 7 installation. It was fine at first; I jumped to Firefox, but after a while Firefox started to randomly turn bluish and the mouse kept stopping. It got gradually worse and worse until Firefox went black and white and the mouse wasn't working at all. I gave up and went for a meal, and when I got back the screen was black. It had probably logged me out due to inactivity; when I pushed a random button to bring the screen back to life I had to wait a few minutes for it to show me only my screen background. No login screen, just the background and a working mouse. The notebook fan was running at 100%, so I assumed the file transfer was still going on and I left it to work. Nothing changed for a full hour, so I hard rebooted. The file transfer was unsuccessful; it had transferred hardly 2 GB. Is this normal? What should I do in these situations? It didn't even let me load the system monitor, or even a terminal. Thanks.

    Read the article

  • Best Way to automatically compress and minimize JavaScript files in an ASP.NET MVC app

    - by wgpubs
    So I have an ASP.NET MVC app that references a number of JavaScript files in various places (in the site master, and additional references in several views as well). I'd like to know if there is an automated way, and if so what the recommended approach is, for compressing and minimizing such references into a single .js file where possible. Such that this:

        <script src="<%= ResolveUrl("~") %>Content/ExtJS/Ext.ux.grid.GridSummary/Ext.ux.grid.GridSummary.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.rating/ext.ux.ratingplugin.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext-starslider/ext-starslider.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.dollarfield.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.combobox.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.datepickerplus/ext.ux.datepickerplus-min.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/SessionProvider.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/TabCloseMenu.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/ActivityForm.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/UserForm.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/SwappedGrid.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/Tree.js" type="text/javascript"></script>

    could be reduced to something like this:

        <script src="<%= ResolveUrl("~") %>Content/MyViewPage-min.js" type="text/javascript"></script>

    Thanks
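
    Not the MVC-specific answer, but the combining step itself is easy to script. A hedged sketch in Python that just concatenates a list of source files into one bundle (the file names are invented examples, and real minification would still need a dedicated tool on top of this):

        from pathlib import Path

        sources = [
            "Content/ExtJS/SessionProvider.js",       # example entries only
            "Content/ExtJS/TabCloseMenu.js",
            "Content/ActivityViewer/ActivityForm.js",
        ]

        bundle = Path("Content/MyViewPage-min.js")
        with bundle.open("w", encoding="utf-8") as out:
            for name in sources:
                out.write(f"/* --- {name} --- */\n")
                out.write(Path(name).read_text(encoding="utf-8"))
                out.write("\n;\n")    # trailing semicolon guards against unterminated statements

        print(f"wrote {bundle} ({bundle.stat().st_size} bytes)")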

    Read the article

  • Why do I get this error "EMCIDeviceError" when opening some wav files in my program.

    - by Roy
    Hey, I have this program that has been working fine until I tried to open one particular wav file. I'm not sure what the problem is, or that I fully understand the error. Do I need to find a new component to use for this file? I am using Delphi 4 Pro and the standard VCL component for Media Player. I am also looking for a good new component that offers more help with wav and mp3 files, but I haven't found what I am looking for yet.

    Read the article

  • How to lock files in a tomcat web application?

    - by yankee
    The Java manual says: The locks held on a particular file by a single Java virtual machine do not overlap. The overlaps method may be used to test whether a candidate lock range overlaps an existing lock. I guess that if I lock a file in a tomcat web application I can't be sure that every call to this application is done by a different JVM, can I? So how can I lock files within my tomcat application in a reliable way?

    Read the article
