
  • Windows startup Powershell script not closing after Start-Process

    - by Matthew Phipps
    I've got a PowerShell 2.0 startup script for my work computer (XP Professional 64-bit), as follows:

    start "C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE" -ArgumentList "/recycle"
    sleep -S 2
    start "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -ArgumentList "https://mail.google.com"
    sleep -S 2
    start "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -ArgumentList "-new-window https://www.google.com/calendar"
    sleep -S 2
    start "C:\Program Files (x86)\Skype\Phone\Skype.exe"

    The sleeps are to ensure that the windows appear on the taskbar in the correct order. I run this from a shortcut on my Quick Launch with the following Target: C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe C:\scripts\initialize.ps1 (Yes, this is 2.0: powershell -Version 2.0 works, as does -Version 1.0, but not -Version 3.0.) The problem is that the command window stays open until the Firefox windows are closed, which is not what I want. Looking at Process Explorer when I run the script, here's what happens:

    1. powershell.exe appears under explorer.exe and the PowerShell window appears (with a black background, oddly; but it's not cmd.exe, since when I was debugging the script, error messages appeared in red).
    2. outlook.exe appears under powershell.exe and the Outlook window appears.
    3. firefox.exe appears under powershell.exe and a Firefox window appears.
    4. A second firefox.exe appears under powershell.exe and another Firefox window appears. The second Firefox process then exits, as expected, since Firefox only uses one process.
    5. skype.exe appears under powershell.exe and the Skype window appears.
    6. The powershell.exe process inexplicably sticks around, as does the PowerShell window.

    If I close both Firefox windows, the powershell.exe process exits and the PowerShell window closes, and the outlook.exe and skype.exe processes appear under explorer.exe as expected. I suspect this has something to do with Firefox's standard input, output and error: I wouldn't expect Outlook or Skype to ever write anything to the console, but Firefox has command-line options that allow it to do so. I've looked over my about:config's user-set values and didn't find anything suspicious. Finally, if a firefox.exe instance is already running (started from the desktop shortcut), the problem doesn't occur (the powershell.exe process exits as it ought to). So what's going on here? I'm going to try adding -WindowStyle Hidden to the shortcut next (gotta close this Firefox to test it), but I want to get to the bottom of this, if only to improve my understanding of how Windows consoles work.

    Read the article

  • Triple (3) Monitors under Linux

    - by widgisoft
    I have a 3-monitor setup (each 1680x1050) via an Nvidia NVS440 (2 GPUs, 2 outputs per GPU, totalling 4 outputs); this works fine under Windows XP and 7 but caused considerable headaches under Linux (Ubuntu 9.04). I had previously used an XFX 9600GT and the onboard XFX 9300GS to produce the same result, but that card was noisy and power hungry, and I was hoping there was some magical switch in the NVS440 that got rid of this annoying problem - turns out the NVS440 is just 2 cards on one physical PCB :-p (I searched the net high and low for people using this card under Linux but found nothing; if anything the card uses less power and is fanless, so I stood to benefit from it either way.) Anyway, using either setup there were 5 solutions available:

    1. Have 3 separate X instances, all unjoined
    2. Have 3 separate X instances, joined by Xinerama
    3. Have 2 separate X instances - one using TwinView, both joined by Xinerama
    4. Have 2 separate X instances - one using TwinView but no Xinerama
    5. Have a single TwinView setup and leave the 3rd screen unplugged :-p

    The fourth option, using 2 separate X instances and TwinView (but no Xinerama), was the best balance in terms of performance and usability but caused a few really annoying issues:

    - You couldn't control (without altering the shortcuts) which screen an application opened onto - and once it was opened you couldn't move it to another screen without opening a terminal and forcing it to move.
    - Nvidia's overriding or falsifying of Xinerama breaks, and the 2 screens joined by TwinView behave like a single huge screen, causing popups to open in the middle of both screens and maximised windows to stretch to the width of the first 2 screens.
    - Firefox can only run one instance as the same user, so having multiple Firefox windows requires at least 2 users.

    The second option "feels" like the right option, but OpenGL is basically disabled, and playing any sort of game or even running anything graphical causes a huge performance drop and instability - even trying to run a basic emulator for GBA or Gens just causes the system to fall over. It works just enough to stare at your desktop and do nothing, but as soon as you start doing some work - opening windows, dragging things around, running multiple copies of Firefox - it just really feels slow. The last option, only going dual screen, works perfectly and everything performs as required: full GPU acceleration, two logical screen spaces - perfect, just make it work across GPUs like Windows! :-p Anyway, I know RandR was supposed to pick up the slack when it introduced GPU objects of sorts to allow multiple GPUs to be stitched together to create one huge desktop at a much deeper layer than Xinerama. I was wondering whether this has now been fixed (I noticed X server 1.7 is out) and whether anyone has got it running successfully?
    Again, my requirements are:

    - One huge desktop to drag any window across
    - Maximising of windows to each screen (as XP does)
    - Running fullscreen apps on the primary screen and disabling the mouse from moving onto the others, or stretched across all 3

    Finally, as a side note: I am aware of the Matrox triple (and dual) head splitter, but even the price they go for on eBay is more than I can afford atm. My argument: I shouldn't have to buy extra hardware to get something to work on Linux when it's something that's existed in the Windows world for a long time (can you tell I don't get on with X? :-p). If I had the cash I'd have bought the latest version of this box already (the new version finally supports large resolutions, as the displays I have are 1680x1050 each).

    Read the article

  • Frequent and weird wifi disconnections

    - by Sidou
    How would you explain, troubleshoot (and solve) the following problem? A wifi ADSL modem router (D-Link 2640R) is installed in the living room at about 1.8m height. It works fine, synchronising and serving a stable internet connection.

    First situation:
    - Laptop 01 at the other end of the house, in room01, about 15m south of the living room. Stable signal of good to very good quality. No disconnections.
    - Laptop 02 in room02, opposite room01 (5m west of it), which puts it at almost the same distance and direction from the router, 15m to the north. Stable signal of good to very good quality. No disconnections.

    Second situation:
    - Laptop 01 moved to room03, north of the living room (actually just 3m behind the wall where the router sits). Stable signal of excellent quality. No disconnections.
    - Laptop 02 still in room02, but now experiencing frequent disconnections. It is actually almost impossible to use the Internet even though the signal level is still very good: either there is no Internet while the wifi icon shows it connected to the access point, or no connection is established at all. This happens every 2 minutes, which means virtually no Internet, as I only get a window of a minute or so to load any website or even reach the router's web-based control panel.

    If Laptop 01 is completely shut down, or its wifi adapter is shut down, or it is still running but its wifi MAC address is blocked, then Laptop 02 has no problem at all. If Laptop 02 is moved nearer to the router, in the living room for instance, then no connection problem occurs even if Laptop 01 is also connected. And if we move Laptop 01 back to its original location (room01), then there is no problem either. I'm completely lost and don't know how to address this issue. I tried changing the wifi channel and even tried the auto channel scan, but that didn't solve it. I know the problem is probably coming from Laptop 01 being in its new location, or some sort of interference, as the problem occurs only under the described conditions, but I have no idea how to solve it! I also scanned the neighbourhood for wifi interference using inSSIDer; there are a few other access points but they don't seem to affect the situation. Any ideas about the steps to follow or tools to use?

    Read the article

  • Successfully concatenating multiple videos

    - by wiseguydigital
    My mission is to create videos out of old web slideshows. To start with I have JPEGs and audio files that worked as Flash slideshows in an old system, structured like this:

    Audio structure:
    my_audio_1.mp3 (this file is a 3-second MP3 of silence)
    my_audio_2.mp3
    my_audio_3.mp3
    my_audio_4 etc... roughly 30 MP3s per slideshow

    Image structure:
    my_image_1.jpg (this acts as the opening slide)
    my_image_2.jpg
    my_image_3.jpg
    my_image_4 etc... roughly 30 images per slideshow

    As there are almost 100 slideshows that must be converted to video, I have created a web-based interface using PHP to automate the process; it sits on a local system and attempts to combine the files using shell_exec(). The process uses the following workflow:

    1. Loop through each slide and make an AVI or MPEG. So for instance my_mini_video_2.avi would be a video that consists of my_image_2.jpg and has a soundtrack of my_audio_2.mp3. This slide would last the length of my_audio_2.mp3.
    2. Join / stitch / concat all of the mini videos to create the final video, using a combination of cat and either mencoder or ffmpeg (I have also tried avimerge but to no avail).
    3. Transcode the new 'master' video to various formats such as FLV etc.

    I thought this would be simple and have been close on many occasions, but it still won't work. I can't get past stage 2 as I can't get a perfect 'master' video. I have now experimented with mencoder and ffmpeg and seem to have been through every combination I can think of. The problem is that the audio and visuals never sync, no matter what I try. I have even tried creating audio-less mini videos, joining the MP3s into one long MP3 using both cat and mp3wrap and then assigning the new long MP3 as the audio track, but this always produces either a very short file or a badly slowed-down file and makes the female voiceover sound like a male boxer!!! There appear to be no problems at all with the original files. Does anybody have any experience in producing a video successfully from the same kind of starting point? Or any ideas on what I may be doing wrong? As an example: if I create silent mini-videos and stitch them together into 'temp-master.mpg', and then join the MP3s together into a single MP3 called 'temp-master-audio.mp3', the audio file's duration is 09:10 and the video file's duration is 08:35. They should be the same, and the audio will seem sloooow. I haven't posted code as I have written lots and lots of combinations.
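    One commonly suggested approach, sketched below under assumptions (a reasonably modern ffmpeg on the PATH; the exact flags vary between ffmpeg releases, and the segment and output file names are placeholders, not the asker's actual files), is to mux each slide with its own audio track and only then concatenate the finished segments with the concat demuxer. Because every segment carries its own audio, the audio and video durations stay paired rather than being joined as two independent long streams, which may help with the drift described above.

```csharp
// Hedged sketch only: builds per-slide segments with ffmpeg, then concatenates them.
// Flags are illustrative for a modern ffmpeg build; file names are placeholders.
using System;
using System.Diagnostics;
using System.IO;

class SlideshowBuilder
{
    // Runs a command line and waits for it to finish.
    static void Run(string exe, string args)
    {
        var psi = new ProcessStartInfo(exe, args) { UseShellExecute = false };
        using (var p = Process.Start(psi)) { p.WaitForExit(); }
    }

    static void Main()
    {
        int slideCount = 30;            // roughly 30 slides per slideshow
        string listFile = "segments.txt"; // concat list consumed by ffmpeg

        using (var list = new StreamWriter(listFile))
        {
            for (int i = 1; i <= slideCount; i++)
            {
                string image = string.Format("my_image_{0}.jpg", i);
                string audio = string.Format("my_audio_{0}.mp3", i);
                string segment = string.Format("segment_{0}.mp4", i);

                // One still image looped for the duration of its audio track;
                // -shortest stops the segment when the audio ends.
                Run("ffmpeg", string.Format(
                    "-y -loop 1 -i {0} -i {1} -shortest -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac {2}",
                    image, audio, segment));

                list.WriteLine("file '{0}'", segment);
            }
        }

        // Concatenate the segments without re-encoding; each segment already carries
        // its own audio, so the combined durations stay in step.
        Run("ffmpeg", string.Format("-y -f concat -safe 0 -i {0} -c copy master.mp4", listFile));
    }
}
```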

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.
Ray Tracing
Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.
Pin-Board Toys
Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.
PolyRay
Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
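The console application that generates the pin matrix is not listed in the article, so the following is only a minimal sketch of the kind of nested-loop generator described above: it emits one sphere/cylinder pair per pin, with a second grid offset by half a pin spacing so the two matrices interleave. The grid size, spacing, pin radii and output file name are guesses chosen to land near the quoted pin count, not the author's actual values, and the texture, light and viewpoint definitions from the earlier scene fragment would be written ahead of these object lines.

```csharp
// Illustrative sketch of the pin-matrix generator described above; all numeric
// values and the output path are assumptions, not the original code.
using System.IO;

class PinBoardGenerator
{
    static void Main()
    {
        double spacing = 0.1;          // distance between pin centres (assumed)
        int columns = 108, rows = 54;  // two interleaved grids of 108 x 54 ~ 11,600 pins

        using (var scene = new StreamWriter("pinboard.pi"))
        {
            for (int grid = 0; grid < 2; grid++)
            {
                // The second matrix is offset by half a spacing in X and Y so the
                // pins interleave, as in a real pin-board toy.
                double offset = grid * spacing / 2.0;

                for (int row = 0; row < rows; row++)
                {
                    for (int col = 0; col < columns; col++)
                    {
                        double x = -5.35 + offset + col * spacing;
                        double y = -1.30 + offset + row * spacing;
                        double z = 0.0;   // later set per pin from the depth image

                        // Each pin is the sphere/cylinder pair from the earlier scene
                        // fragment, translated to its grid position.
                        scene.WriteLine("object {{ sphere <{0:0.000}, {1:0.000}, {2:0.000}>, 0.05 chrome }}", x, y, z);
                        scene.WriteLine("object {{ cylinder <{0:0.000}, {1:0.000}, {2:0.000}>, <{0:0.000}, {1:0.000}, {3:0.000}>, 0.025 chrome }}", x, y, z, z - 0.8);
                    }
                }
            }
        }
    }
}
```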
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately a series of frames from a depth camera could be used. Windows Kinect The Kenect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kenect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developers perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK, the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pin in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence JPEG files that will be used to animate the pins in the animation the Start and Stop buttons are used to start and stop the image recording. En example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
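Stepping back to the depth-image mapping described above (each pin's Z coordinate taken from the pixel value at its position in the depth frame), a hedged sketch of that mapping is shown below. It assumes an 8-bit greyscale depth frame loaded with System.Drawing, and the scale factor and the proportional grid-to-pixel mapping are illustrative choices rather than the article's implementation.

```csharp
// Illustrative sketch: derive a pin's Z offset from the matching pixel in a depth frame.
// The scale factor and grid-to-pixel mapping are assumptions, not the original code.
using System.Drawing;

class PinDepthMapper
{
    // Returns the Z offset for the pin at (col, row) in a grid of the given size,
    // sampling the depth image at the proportional pixel position.
    public static double PinZ(Bitmap depthFrame, int col, int row, int columns, int rows, double maxTravel)
    {
        int px = col * (depthFrame.Width - 1) / (columns - 1);
        int py = row * (depthFrame.Height - 1) / (rows - 1);

        // The captured depth frames are greyscale, so any channel gives the 0-255 value;
        // white means "in front of the depth range", black means "behind" it.
        int value = depthFrame.GetPixel(px, py).R;

        // Scale 0..255 into 0..maxTravel so a brighter (closer) pixel pushes the pin further out.
        return value / 255.0 * maxTravel;
    }
}
```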
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster that an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server. Number of Servers Cost 1 $500 16 $8,000 256 $128,000   As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure a job that takes 256 hours to complete could be perfumed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below. Number of Worker Roles Render Time Cost 1 256 hours $30.72 16 16 hours $30.72 256 1 hour $30.72   Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components: ·         On-Premise o   Windows Kinect – Used combined with the Kinect Explorer to create a stream of depth images. o   Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue. o   Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process. o   Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete. ·         Windows Azure o   Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.   The architecture of each worker role is shown below.   The worker role is configured to use local storage, which provides file storage on the worker role instance that can be use by the applications to render the image and transform the format of the image. 
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The first 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.   Completed Animation I’ve uploaded the completed animation to YouTube, a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles   The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes CPU to render, this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilized the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective. ·         Windows Azure can provide massive compute power, on demand, in a matter of minutes. ·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution. ·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget. ·         No charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.) Tips for using Windows Azure for Grid Computing Scenarios I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes, in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure. ·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. ·         Windows Azure accounts are typically limited to 20 cores. 
If you need to use more than this, a call to support and a credit card check will be required. ·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started. ·         Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle. ·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective. ·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles. ·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
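The article shows the worker-role side of the queue-based design; for completeness, the sketch below suggests what the job-submission side (the on-premise Animation Creator described in the architecture) might look like when using the classic Microsoft.WindowsAzure.Storage client library directly rather than the author's JobQueue and CloudRayBlob wrappers. The "scenes" container, "renderjobs" queue and ".pi" extension come from the article; the local path and connection string are placeholders.

```csharp
// Hedged sketch of the job-submission side (Animation Creator), using the classic
// Microsoft.WindowsAzure.Storage client library instead of the article's wrappers.
// The local folder path and connection string are placeholders.
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Queue;

class AnimationCreator
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");

        var blobClient = account.CreateCloudBlobClient();
        var scenes = blobClient.GetContainerReference("scenes");   // scene description files
        scenes.CreateIfNotExists();

        var queueClient = account.CreateCloudQueueClient();
        var jobs = queueClient.GetQueueReference("renderjobs");    // one message per frame
        jobs.CreateIfNotExists();

        // Upload each PolyRay scene file and enqueue a job message naming it, so any
        // idle worker role instance can pick that frame up and render it.
        foreach (var scenePath in Directory.GetFiles(@"C:\CloudRay\scenes", "*.pi"))
        {
            string sceneFile = Path.GetFileName(scenePath);

            var blob = scenes.GetBlockBlobReference(sceneFile);
            using (var stream = File.OpenRead(scenePath))
            {
                blob.UploadFromStream(stream);
            }

            jobs.AddMessage(new CloudQueueMessage(sceneFile));
        }
    }
}
```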

    Read the article

  • Building a better mouse-trap – Improving the creation of XML Message Requests using Reflection, XML & XSLT

    - by paulschapman
    Introduction The way I previously created messages to send to the GovTalk service I used the XMLDocument to create the request. While this worked it left a number of problems; not least that for every message a special function would need to created. This is OK for the short term but the biggest cost in any software project is maintenance and this would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to be using the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepted XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce The Sweet Chilli Sauce The request to search for a company based on it’s number is as follows; <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID>1</TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID>????????????????????????????????</SenderID> <Authentication> <Method>CHMD5</Method> <Value>????????????????????????????????</Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber>99999999</PartialCompanyNumber> <DataSet>LIVE</DataSet> <SearchRows>1</SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> This is the XML that we send to the GovTalk Service and we get back a list of companies that match the criteria passed A message is structured in two parts; The envelope which identifies the person sending the request, with the name of the request, and the body which gives the detail of the company we are looking for. The Chilli What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start we need to create an object which will represent the contents of the message we are sending. However there is a common properties in all the messages that we send to Companies House. These properties are as follows SenderId – the id of the person sending the message SenderPassword – the password associated with Id TransactionId – Unique identifier for the message AuthenticationValue – authenticates the request Because these properties are unique to the Companies House message, and because they are shared with all messages they are perfect candidates for a base class. 
The class is as follows; using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Security.Cryptography; using System.Text; using System.Text.RegularExpressions; using Microsoft.WindowsAzure.ServiceRuntime; namespace CompanyHub.Services { public class GovTalkRequest { public GovTalkRequest() { try { SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId"); SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword"); TransactionId = DateTime.Now.Ticks.ToString(); AuthenticationValue = EncodePassword(String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId)); } catch (System.Exception ex) { throw ex; } } /// <summary> /// returns the Sender ID to be used when communicating with the GovTalk Service /// </summary> public String SenderID { get; set; } /// <summary> /// return the password to be used when communicating with the GovTalk Service /// </summary> public String SenderPassword { get; set; } // end SenderPassword /// <summary> /// Transaction Id - uses the Time and Date converted to Ticks /// </summary> public String TransactionId { get; set; } // end TransactionId /// <summary> /// calculate the authentication value that will be used when /// communicating with /// </summary> public String AuthenticationValue { get; set; } // end AuthenticationValue property /// <summary> /// encodes password(s) using MD5 /// </summary> /// <param name="clearPassword"></param> /// <returns></returns> public static String EncodePassword(String clearPassword) { MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider(); byte[] hashedBytes; UTF32Encoding encoder = new UTF32Encoding(); hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword)); String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower(); return result; } } } There is nothing particularly clever here, except for the EncodePassword method which hashes the value made up of the SenderId, Password and Transaction id. Each message inherits from this object. So for the Company Number Search in addition to the properties above we need a partial number, which dataset to search – for the purposes of the project we only need to search the LIVE set so this can be set in the constructor and the SearchRows. Again all are set as properties. With the SearchRows and DataSet initialized in the constructor. public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable { /// <summary> /// /// </summary> public CompanyNumberSearchRequest() : base() { DataSet = "LIVE"; SearchRows = 1; } /// <summary> /// Company Number to search against /// </summary> public String PartialCompanyNumber { get; set; } /// <summary> /// What DataSet should be searched for the company /// </summary> public String DataSet { get; set; } /// <summary> /// How many rows should be returned /// </summary> public int SearchRows { get; set; } public void Dispose() { DataSet = String.Empty; PartialCompanyNumber = String.Empty; DataSet = "LIVE"; SearchRows = 1; } } As well as inheriting from our base class, I have also inherited from IDisposable – not just because it is just plain good practice to dispose of objects when coding, but it gives also gives us more versatility when using the object. 
There are four stages in making a request and this is reflected in the four methods we execute in making a call to the Companies House service; Create a request Send a request Check the status If OK then get the results of the request I’ve implemented each of these stages within a static class called Toolbox – which also means I don’t need to create an instance of the class to use it. When making a request there are three stages; Get the template for the message Serialize the object representing the message Transform the serialized object using a predefined XSLT file. Each of my templates I have defined as an embedded resource. When retrieving a resource of this kind we have to include the full namespace to the resource. In making the code re-usable as much as possible I defined the full ‘path’ within the GetRequest method. requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); So we now have the full path of the file within the assembly. Now all we need do is retrieve the assembly and get the resource. asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); Once retrieved  So this can be returned to the calling function and we now have a stream of XSLT to define the message. Time now to serialize the request to create the other side of this message. // Serialize object containing Request, Load into XML Document t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); First off we need the type of the object so we make a call to the GetType method of the object containing the Message properties. Next we need a MemoryStream, XmlSerializer and an XMLTextWriter so these can be initialized. The object is serialized by making the call to the Serialize method of the serializer object. The result of that is then converted into a MemoryStream. That MemoryStream is then converted into a string. ConvertByteArrayToString This is a fairly simple function which uses an ASCIIEncoding object found within the System.Text namespace to convert an array of bytes into a string. public static String ConvertByteArrayToString(byte[] bytes) { System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding(); return enc.GetString(bytes); } I only put it into a function because I will be using this in various places. The Sauce When adding support for other messages outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message. 
For the CompanyNumberSearch the XSLT file is as follows; <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID> <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/> </TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID> <Authentication> <Method>CHMD5</Method> <Value> <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/> </Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber> <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/> </PartialCompanyNumber> <DataSet> <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/> </DataSet> <SearchRows> <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/> </SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> </xsl:template> </xsl:stylesheet> The outer two tags define that this is a XSLT stylesheet and the root tag from which the nodes are searched for. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object which will transform the XSLT template and the serialized object into the request to Companies House. xslt = new XslCompiledTransform(); resultStream = new MemoryStream(); writer = new XmlTextWriter(resultStream, Encoding.ASCII); doc = new XmlDocument(); The Serialize method require XmlTextWriter to write the XML (writer) and a stream to place the transferred object into (writer). The XML will be loaded into an XMLDocument object (doc) prior to the transformation. // create XSLT Template xslTemplate = Toolbox.GetRequest(Template); xslTemplate.Seek(0, SeekOrigin.Begin); templateReader = XmlReader.Create(xslTemplate); xslt.Load(templateReader); I have stored all the templates as a series of Embedded Resources and the GetRequestCall takes the name of the template and extracts the relevent XSLT file. /// <summary> /// Gets the framwork XML which makes the request /// </summary> /// <param name="RequestFile"></param> /// <returns></returns> public static Stream GetRequest(String RequestFile) { String requestFile = String.Empty; Stream sr = null; Assembly asm = null; try { requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); } catch (Exception) { throw; } finally { asm = null; } return sr; } // end private static stream GetRequest We first take the template name and expand it to include the full namespace to the Embedded Resource I like to keep all my schemas in the same directory and so the namespace reflects this. The rest is the default namespace for the project. 
Then we get the currently executing assembly (which will contain the resources with the call to GetExecutingAssembly() ) Finally we get a stream which contains the XSLT file. We use this stream and then load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object. We convert the object containing the message properties into Xml by serializing it; calling the Serialize() method of the XmlSerializer object. To set up the object we do the following; t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); We first determine the type of the object being transferred by calling GetType() We create an XmlSerializer object by passing the type of the object being serialized. The serializer writes to a memory stream and that is linked to an XmlTextWriter. Next job is to serialize the object and load it into an XmlDocument. serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; xmlRequest = new XmlTextReader(ms); GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); doc.LoadXml(GovTalkRequest); Time to transform the XML to construct the full request. xslt.Transform(doc, writer); resultStream.Seek(0, SeekOrigin.Begin); request = Toolbox.ConvertByteArrayToString(resultStream.ToArray()); So that creates the full request to be sent  to Companies House. Sending the request So far we have a string with a request for the Companies House service. Now we need to send the request to the Companies House Service. Configuration within an Azure project There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article but the following is a summary. Configuration is defined in two files within the parent project *.csdef which contains the definition of configuration setting. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WebRole name="CompanyHub.Host"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="80" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="DataConnectionString" /> </ConfigurationSettings> </WebRole> <WebRole name="CompanyHub.Services"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="8080" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="SenderId"/> <Setting name="SenderPassword" /> <Setting name="GovTalkUrl"/> </ConfigurationSettings> </WebRole> <WorkerRole name="CompanyHub.Worker"> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> </ConfigurationSettings> </WorkerRole> </ServiceDefinition>   Above is the configuration definition from the project. What we are interested in however is the ConfigurationSettings tag of the CompanyHub.Services WebRole. 
There are four configuration settings here, but at the moment we are interested in the second to forth settings; SenderId, SenderPassword and GovTalkUrl The value of these settings are defined in the ServiceDefinition.cscfg file; <?xml version="1.0"?> <ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"> <Role name="CompanyHub.Host"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> <Role name="CompanyHub.Services"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="SenderId" value="UserID"/> <Setting name="SenderPassword" value="Password"/> <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/> </ConfigurationSettings> </Role> <Role name="CompanyHub.Worker"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> </ServiceConfiguration>   Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters we can now transmit the request. This is done by ‘POST’ing a stream of XML to the Companies House servers. govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl"); request = WebRequest.Create(govTalkUrl); request.Method = "POST"; request.ContentType = "text/xml"; writer = new StreamWriter(request.GetRequestStream()); writer.WriteLine(RequestMessage); writer.Close(); We use the WebRequest object to send the object. Set the method of sending to ‘POST’ and the type of data as text/xml. Once set up all we do is write the request to the writer – this sends the request to Companies House. Did the Request Work Part I – Getting the response Having sent a request – we now need the result of that request. response = request.GetResponse(); reader = response.GetResponseStream(); result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader));   The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls the results come in the form of a stream which we convert into a string. Did the Request Work Part II – Translating the Response Much like XSLT and XML were used to create the original request, so it can be used to extract the response and by deserializing the result we create an object that contains the response. Did it work? It would be really great if everything worked all the time. 
Of course, if it did then I don’t suppose people would pay me and others the big bucks so that our programmes do not:

a) Collapse in a heap (this is an area of memory)
b) Blow every fuse in the place in a shower of sparks (this will probably not happen, this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting)
c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA get a manned moon/Mars mission set up, unlikely to happen)
d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of exceeding the wildest imaginations of Hollywood writers (note writers – Hollywood executives have no imagination and, judging by recent output of that town, have turned plagiarism into an art form))
e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened)

So anyway – we need to check whether our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <xsl:template match="/">
    <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <Status> <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/> </Status>
      <Text> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/> </Text>
      <Location> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/> </Location>
      <Number> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/> </Number>
      <Type> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/> </Type>
    </GovTalkStatus>
  </xsl:template>
</xsl:stylesheet>

The only thing different from the previous XSL files is the reference to two namespaces, ev & gt. These are defined at the top of the GovTalk response;

xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"

If we do not put these references into the XSLT template then the XslCompiledTransform object will not be able to find the relevant tags.

Deserialization is a fairly simple activity.

encoder = new ASCIIEncoding();
ms = new MemoryStream(encoder.GetBytes(statusXML));
serializer = new XmlSerializer(typeof(GovTalkStatus));
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
messageStatus = (GovTalkStatus)serializer.Deserialize(ms);

We set up a serializer using the type that holds the error state and pass it the result of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state and the error message. All we need to do is check the status. If there is an error then we can flag it.
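As an aside, the GovTalkStatus class itself is never listed in the article. Judging by the elements the status XSLT emits and the way status.Status and status.Text are used in the final listing, it is presumably a simple container along the lines of the following sketch – an assumption, not the article's actual code.

namespace CompanyHub.Services
{
    // Assumed shape of the status container filled in by deserializing the
    // output of the status XSLT above; the real class may differ.
    public class GovTalkStatus
    {
        public string Status { get; set; }
        public string Text { get; set; }
        public string Location { get; set; }
        public string Number { get; set; }
        public string Type { get; set; }
    }
}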
If there is no error then we extract the results and pass them back as an object to the calling function. We do this by – guess what – defining an XSLT template for the result and using that to create an XML stream which can be deserialized into a .NET object. In this instance the XSLT to create the result of a Company Number Search is;

<?xml version="1.0" encoding="us-ascii"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema" exclude-result-prefixes="ev">
  <xsl:template match="/">
    <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <CompanyNumber> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/> </CompanyNumber>
      <CompanyName> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/> </CompanyName>
    </CompanySearchResult>
  </xsl:template>
</xsl:stylesheet>

and the object definition is;

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace CompanyHub.Services
{
    public class CompanySearchResult
    {
        public CompanySearchResult()
        {
            CompanyNumber = String.Empty;
            CompanyName = String.Empty;
        }

        public String CompanyNumber { get; set; }
        public String CompanyName { get; set; }
    }
}

The entire code to send a request and interpret the results is;

String request = String.Empty;
String response = String.Empty;
GovTalkStatus status = null;
fault = null;
try
{
    using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest())
    {
        requestObj.PartialCompanyNumber = CompanyNumber;
        request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl");
        response = Toolbox.SendGovTalkRequest(request);
        status = Toolbox.GetMessageStatus(response);
        if (status.Status.ToLower() == "error")
        {
            fault = new HubFault() { Message = status.Text };
        }
        else
        {
            Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult));
        }
    }
}
catch (FaultException<ArgumentException> ex)
{
    fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message };
}
catch (System.Exception ex)
{
    fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message };
}
finally
{
}

Wrap up

So there we have it – a reusable set of functions to send and interpret XML results from an internet-based service. The code is reusable, with a little change, with any service which uses XML as a transport mechanism – and as for the Companies House GovTalk service, all I need to do is create the various objects for the result and the message sent, and the relevant XSLT files. I might need minor changes for other services, but something like 70-90% will be exactly the same.
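The final listing leans on a set of Toolbox helpers (CreateRequest, SendGovTalkRequest, GetMessageStatus and GetGovTalkResponse) that are never shown in one place. Assembled from the fragments earlier in the article, they might look something like the sketch below. Only the signatures are taken from the calls above; the method bodies, the embedded-resource naming and the "GovTalkStatus.xsl" template name are assumptions. Error handling and disposal are kept to a minimum so the flow stays visible.

using System;
using System.IO;
using System.Net;
using System.Reflection;
using System.Text;
using System.Xml;
using System.Xml.Serialization;
using System.Xml.Xsl;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace CompanyHub.Services
{
    public static class Toolbox
    {
        // Serialize the request object and run it through the named embedded XSLT
        // template to produce the full GovTalk request.
        public static string CreateRequest(object obj, string templateName)
        {
            var ms = new MemoryStream();
            var xmlWriter = new XmlTextWriter(ms, Encoding.ASCII);
            new XmlSerializer(obj.GetType()).Serialize(xmlWriter, obj);
            xmlWriter.Flush();

            var doc = new XmlDocument();
            doc.LoadXml(Encoding.ASCII.GetString(ms.ToArray()));
            return Transform(doc, templateName);
        }

        // POST the request to the gateway URL held in configuration and return the reply.
        public static string SendGovTalkRequest(string requestMessage)
        {
            string url = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl");
            var request = WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "text/xml";
            using (var writer = new StreamWriter(request.GetRequestStream()))
            {
                writer.WriteLine(requestMessage);
            }
            using (var response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }

        // Pull the status section out of the response using the status XSLT shown earlier.
        public static GovTalkStatus GetMessageStatus(string response)
        {
            return (GovTalkStatus)GetGovTalkResponse(response, "GovTalkStatus.xsl", typeof(GovTalkStatus));
        }

        // Transform the GovTalk response with the named XSLT and deserialize the
        // output into the requested result type.
        public static object GetGovTalkResponse(string response, string templateName, Type resultType)
        {
            var doc = new XmlDocument();
            doc.LoadXml(response);
            string resultXml = Transform(doc, templateName);
            using (var reader = new StringReader(resultXml))
            {
                return new XmlSerializer(resultType).Deserialize(reader);
            }
        }

        // Shared helper: load an embedded XSLT resource and apply it to a document.
        private static string Transform(XmlDocument doc, string templateName)
        {
            var xslt = new XslCompiledTransform();
            using (Stream s = Assembly.GetExecutingAssembly().GetManifestResourceStream(templateName))
            using (XmlReader template = XmlReader.Create(s))
            {
                xslt.Load(template);
            }

            var sb = new StringBuilder();
            using (var writer = XmlWriter.Create(sb, xslt.OutputSettings))
            {
                xslt.Transform(doc, writer);
            }
            return sb.ToString();
        }
    }
}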

    Read the article

  • Why won't jqGrid populate initially in Chrome

    - by Maxm007
    Hi, I've got a web page with a jqGrid that uses am xmlreader to populate itself with data that is spat out by a RoR service. The page loads fine in firefox and safari. In Chrome however I get a blank grid. Only when I change the sort order by clicking on the columns does it populate. <html> <head> <title>LocalFx</title> <link href="/stylesheets/main.css?1271423251" media="screen" rel="stylesheet" type="text/css" /> <link href="/stylesheets/redmond/jquery-ui-1.8.custom.css?1271404544" media="screen" rel="stylesheet" type="text/css" /> <link href="/stylesheets/ui.jqgrid.css?1265561560" media="screen" rel="stylesheet" type="text/css" /> <script src="/javascripts/jquery-1.3.2.min.js?1259426008" type="text/javascript"></script> <script src="/javascripts/i18n/grid.locale-en.js?1266140090" type="text/javascript"></script> <script src="/javascripts/jquery.jqGrid.min.js?1271437772" type="text/javascript"></script> <script type="text/javascript"> jQuery().ready(function() { jQuery("#list").jqGrid({ xmlReader: { root:"contracts", row:"contract", repeatitems:false, id:"id" }, jsonReader: { repeatitems:false, root:"contracts" }, datatype: 'xml', url:'http://localhost:3000/contracts/index/all.xml', mtype: 'GET', colNames:['User','B/S', 'Currency', 'Amount', 'Rate'], colModel :[ {name:'user', index:'username', width:100 , xmlmap:'user>username'} , {name:'side', index:'side', width:100 , xmlmap:'side'} , {name:'currency', index:'ccy', width:100 , xmlmap:'currency>ccy'} , {name:'amount', index:'amount', width:100 , xmlmap:'amount'}, {name:'rate', index:'rate', width:100 , xmlmap:'exchange-rate>rate'} ], pager: jQuery('#pager'), caption: 'Contracts', sortname: 'side', sortorder: "asc", viewrecords:true, rowNum:10, rowList:[10,20,30] }); $("#list").trigger("reloadGrid") }); </script> </head> <body> <table id="list" align="center" class="scroll"></table> <div id="pager" class="scroll" style="text-align:center;"></div> </body> </html> This is the xml: <contracts type="array"> <contract> <amount type="float">1000.0</amount> <created-at type="datetime">2010-04-16T13:59:40Z</created-at> <currency-id type="integer">488525179</currency-id> <id type="integer">18277852</id> <side>BUY</side> <updated-at type="datetime">2010-04-16T13:59:40Z</updated-at> <user-id type="integer">830138774</user-id> <exchange-rate> <contract-id type="integer">18277852</contract-id> <created-at type="datetime">2010-04-16T13:59:40Z</created-at> <denccy-id type="integer">890731696</denccy-id> <id type="integer">419011264</id> <numccy-id type="integer">488525179</numccy-id> <rate type="float">1.3</rate> <updated-at type="datetime">2010-04-16T13:59:40Z</updated-at> </exchange-rate> <user> <created-at type="datetime">2010-04-16T13:59:40Z</created-at> <id type="integer">830138774</id> <updated-at type="datetime">2010-04-16T13:59:40Z</updated-at> <username>John Doe</username> </user> <currency> <ccy>EUR</ccy> <created-at type="datetime">2010-04-16T13:59:40Z</created-at> <id type="integer">488525179</id> <updated-at type="datetime">2010-04-16T13:59:40Z</updated-at> </currency> </contract> <contract> <amount type="float">500.0</amount> <created-at type="datetime">2010-04-16T13:59:40Z</created-at> <currency-id type="integer">890731696</currency-id> <id type="integer">716237132</id> <side>SELL</side> <updated-at type="datetime">2010-04-16T13:59:40Z</updated-at> <user-id type="integer">830138774</user-id> <exchange-rate> <contract-id type="integer">716237132</contract-id> <created-at type="datetime">2010-04-16T13:59:40Z</created-at> <denccy-id 
type="integer">890731696</denccy-id> <id type="integer">861902380</id> <numccy-id type="integer">488525179</numccy-id> <rate type="float">1.3</rate> <updated-at type="datetime">2010-04-16T13:59:40Z</updated-at> </exchange-rate> <user> <created-at type="datetime">2010-04-16T13:59:40Z</created-at> <id type="integer">830138774</id> <updated-at type="datetime">2010-04-16T13:59:40Z</updated-at> <username>John Doe</username> </user> <currency> <ccy>GBP</ccy> <created-at type="datetime">2010-04-16T13:59:40Z</created-at> <id type="integer">890731696</id> <updated-at type="datetime">2010-04-16T13:59:40Z</updated-at> </currency> </contract> </contracts>

    Read the article

  • Windows telling me, the local security authority is internally inconsistent upon mounting a network drive

    - by acme
    Since ages I've mounted a network share (via samba to a Linux machine) in Windows 7 to access it via drive letter. This worked flawlessly so far. Until now. Suddenly I couldn't access the drive anymore. Windows was telling me the network name (I didn't remember the exact term) was already in use. So I disconnected and tried to connect again: net use Y: \\10.10.10.208\work After a long time I get a message saying "The Local Security Authority (LSA) database contains an internal inconsistency" A restart didn't help. The mapped share is accessible (works on other machines in the same network), so obviously something strange is going on on my machine. Can anyone tell me how I can fix this inconsistency? Update: All machines that have saved the login information refuse with this error. So it must be something with the authorization. When I use net use Y: \\10.10.10.208\work /user:raphael it prompts me for the password and then returns that error message.

    Read the article

  • How to import .ops file in Outlook 2007

    - by r0ca
    Hi all, I install ORK.exe on my Windows 7 machine to create a copy of my profile so I will have it as a template to push it to new user. So basically, I ran the Profile Wizard to create an .ops file. I saved it in my documents for the moment. Right now, I'm trying to import that file into my Outlook but I just can't figure it out! My goal is to have a profile template to push to new users when they log in for first time. Thanks a bunch in advance! EDIT: I just figured it out. But I have a problem: When I create my profile, Outlook 2007 on exchange was not in cache mode. So I created my profile with the Profile Wizard with cache mode disabled. I save my outlook.ops in My documents. Then, I enabled cache mode so when I will import my .ops file, it should be restore as cache mode disabled... But When I do that, cache mode is still checked (which I don't want to) Am I doing something wrong? I guess so but I just can't figure it out

    Read the article

  • Why is firefox 4 not HW accelerated?

    - by acidzombie24
    At first i thought it was my computer but then i tried chrome. Why isnt firefox not hardware accelerated? The first screenshot shows chrome at 23% usage. The 2nd shows 59%. I have 2 cpus which is why it isnt 100% usage. The game pictured is biolab Heres the text for about:support Application Basics Name Firefox Version 4.0 User Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0 Profile Directory Open Containing Folder Enabled Plugins about:plugins Build Configuration about:buildconfig Extensions Name Version Enabled ID Modified Preferences Name Value accessibility.typeaheadfind.flashBar 0 browser.places.importBookmarksHTML false browser.places.smartBookmarksVersion 2 browser.startup.homepage_override.buildID 20110303194838 browser.startup.homepage_override.mstone rv:2.0 extensions.lastAppVersion 4.0 gfx.font_rendering.directwrite.enabled true network.cookie.prefsMigrated true places.history.expiration.transient_current_max_pages 127602 privacy.sanitize.migrateFx3Prefs true Graphics Adapter Description Mobile Intel(R) 4 Series Express Chipset Family Vendor ID 8086 Device ID 2a42 Adapter RAM Unknown Adapter Drivers igdumd64 igd10umd64 igdumdx32 igd10umd32 Driver Version 8.15.10.2202 Driver Date 8-25-2010 Direct2D Enabled true DirectWrite Enabled true (6.1.7600.16385, font cache n/a) WebGL Renderer Google Inc. -- ANGLE -- OpenGL ES 2.0 (ANGLE 0.0.0.541) GPU Accelerated Windows 1/1 Direct3D 10

    Read the article

  • Good bug tracking with Sharepoint?

    - by torbengb
    At my place of work, it has been decided to move many processes to Sharepoint. I'm now looking into how Sharepoint can be used for bug tracking (à la Mantis, FogBugz etc. but within Sharepoint). Specifically, we're using a collaboration room and the solution must work inside that. I know that I can create lists using an "Issue tracker" template, but it lacks workflow, integrated correspondence (like FogBugz), and audit log (any user can edit any field any time, without it being noted anywhere). That's not sufficient, so I am looking for "bigger" solutions but haven't yet found anything at all. This question is similar but aims at Helpdesk use; we aim at bug tracking and change requests to a system. I'm open to suggestions! As I'm not an administrator, I can't just grab a Sharepoint component and install it for testing. I'm looking for experiences, documentation, white papers, screen shots -- the actual downloadable will be relevant later. Ideally, some of these matters should be covered: Support for different ticket types (bug, feature, inquiry, internal task). Configurable workflow per ticket type, no fixed number of steps. Configurable read/write permissions per field and per workflow status. Configurable dashboard for managers with nice charts. Configurable email notifications. Correspondence à la FogBugz. (Challenge: we use Notes, not Exchange.)

    Read the article

  • Outlook 2007 Backup to D:\Outlook Fails - Access Denied, Write-Protected or File In Use

    - by nicorellius
    I can successfully save the Outlook PST file to the default location on the C drive (C:\Documents and Settings\user\ ... \Outlook) but when I change the backup save to directory to Outlook on the D drive I get the error: Cannot copy Outlook: Access is denied. Make sure the disk is not full or write protected and that the file is not currently in use. I suppose it is not that crucial that I save this file here, but I have never seen this problem before and I have made this same change in the past. I did some searching in this knowledge exchange as well as elsewhere on changing permissions, etc, but this didn't help. I discovered that the folder on my D drive (called Outlook) is not write-protected and nor is it read-only, as I can save to and modify files in that directory, as well as rename and delete the directory itself. At the time when I installed this version of Outlook, I used a previously saved Personal Folder (a backup PST file) and I thought having this still open in Outlook was causing the trouble. But I closed it and still have the same problem. I know this is probably a silly error on my part but I would like to figure it out. I'm new to superuser, but the answers I see are usually very good, so I thought I would post my first question. Thanks in advance.

    Read the article

  • Windows Task Scheduler broken: "service is not available"

    - by 2371
    Problem I'm trying to run Windows Task Scheduler from the start menu (the command is %SystemRoot%\system32\taskschd.msc /s) but as of very recently, I'm getting an error: ![Task Scheduler service is not available. Verify that the service is running.][1] I made some screenshots but I can't embed them unfortunately because I don't have enough reputation points yet: http://i.imgur.com/7rPXf.png and Mmddy.png and wonnF.png The window then opens as usual except no tasks are displayed and the error "Reading Data Failed" is shown on a few of the panels. ![second screenshot][2] Possible Causes ran rpccfg -a 1 and netsh rpc add 127.0.0.1 changed PC name twice while computer was still loading installed and used DeltaCopy installed Adobe AIR installed Warsow I can't think of any other system changes I've made. Things Tried ran rpccfg with the parameter to reset to defaults ran netsh with the parameter to reset to defaults uninstalled DeltaCopy forced the service to restart. The service and its dependencies appear started and looked normal before and after connecting to "another computer" from inside the program and entering credentials for the current machine. This said access denied yesterday but today it says "Connecting as another user is only supported when connecting to a computer running Windows Vista™ or later." and partially works but doesn't show my tasks. ![third screenshot][3] but I am on Vista! Please help!

    Read the article

  • Problem with XLSX file on Office 2003 with Compatibility Pack

    - by MadBoy
    I've this machine which has Office 2003 installed with Compatibility Pack. User received one file which she has to work with in XLSX format (file has to stay in that format so options to save it as XLS can be skipped). When she opens the file from Desktop or any other location it gives an error like "Cannot find the file in following location" and in the background it starts Converting XLSX file to XLS. In the end Excel is opened up with some random file name X000008.xls (Read Only) which is just 1 to 1 conversion of the XLSX document. However if I go directly to Excel and use File / Open and try to open the XLSX file no error or conversion is done. The file is writable and can be saved in XLSX format. The file name is simple like This_file something.XLSX, I've even tried to remove all spaces but it gave no better results. Anyone has any recommendations? So far I have done: 1. Uninstalled Compatibility Pack and installed it again. Tried opening it without any other updates and the error still pops out. In progress: I am now running all possible updates of office (like 5 of them) which i suppos won't fix anything (as they were applied when I've uninstalled compatibility pack). Next in run will be (not yet done) installing Windows XP SP3. What other things I can do?

    Read the article

  • DriveImage XML fails with a Windows Volume Shadow Service Error

    - by Ssvarc
    I'm trying to image a SATA laptop hard drive, using DriveImageXML, that is attached to my computer via a USB adapter. I'm running Win7 Ultimate 64 bit. DriveXML is returning: Could not initialize Windows Volume Shadow Service (VSS). ERROR C:\Program Files (x86)\Runtime Software\Drivelmage XML\vss64.exe failed to start. ERROR TIMEOUT Make sure VSSVC.EXE is running in your task manager. Click Help for more information. VSSVC.EXE is running in Task Manager, as is VSS64.exe. Looking at the FAQ on the Runtime webpage this turned up: Please verify in Settings-Control Panel-Administrative Tools-Services that the following services are enabled: MS Software Shadow Copy Provider Volume Shadow Copy Also make sure you are able to stop and start these services. Possible reasons for VSS failures: For VSS to work, at least one volume in your computer must be NTFS. If you use only FAT drives, VSS will not function. The required NTFS volume does not need to be identical with the volume you want to image. You should make sure that VSSVC.EXE is running in your task manager. If the problems persist, registering "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help. Both of those services are running and can be started and stopped without issue. Using "regsvr32" to register ""oleaut32.dll" returns successful, but "oleaut.dll" returns: The module "oleaut.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found. Some other information that might be relevant. Browsing to the drive is successful, but accessing certain folders returns an "access" error. Windows runs a permissions adder that adds the current user profile to the NFTS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?

    Read the article

  • How do I stop Sophos anti virus from scanning directories that are under source control

    - by user26453
    From googling it seems its well known that SophosAV as well as other AV programs have issues with how they interact and can inhibit source control utilities like TortoiseHG or TortoiseSVN. One solution is to exclude directories under source control from on-access scanning as detailed here on Sophos's support site. There is a corollary article that mentions some issues related to this, namely the need to place multiple entries for exclusions based on the possibility of the location being accessed through the short vs. long name (e.g., Progra~1 vs. "Program Files"). One other twist is I am using a junction to relocate my user directory, C:\Users\Username, to a second hard drive, E:. Since I am not sure how this interacts I have included the source control directory as they are nested in both locations. As a result, I have included the two exclusions for the on-access scanning exclusions (and to be on the safe side on-demand exclusions as well, although this should only come into play when I select a parent directory of the exclusion to be scanned on-demand, but still). You'll notice I have no need to add extra exclusions for those locations based on short vs. long name distinctions. The two exclusion I have then, for both on-access and on-demand scanning exclusions are: C:\Users\Username\source-control-directory E:\source-control-directory However, this does not seem to work as TortoiseHG still lags terribly in response to any request as AV software starts scanning when the directory is accessed via TortoiseHG. I can verify without a doubt that Sophos is causing the problems: I can completely disable on-access scanning. Once this is done TortoiseHG responds very fast to all operations. I cannot leave this disabled obviously, but since the exclusion don't seem to be working, what next?

    Read the article

  • Why do I get "unsupported architecture" errors trying to install a Python library in OSX?

    - by Emma518
    I am trying to install a Python library in the Presto package, source http://www.cv.nrao.edu/~sransom/presto/ Using 'gmake fftfit' I get the following error: cd fftfit_src ; f2py-2.7 -c fftfit.pyf *.f running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building extension "fftfit" sources creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7 f2py options: [] f2py: fftfit.pyf Reading fortran codes... Reading file 'fftfit.pyf' (format:free) Post-processing... Block: fftfit Block: cprof Block: fftfit Post-processing (stage 2)... Building modules... Building module "fftfit"... Constructing wrapper function "cprof"... c,amp,pha = cprof(y,[nmax,nh]) Constructing wrapper function "fftfit"... shift,eshift,snr,esnr,b,errb,ngood = fftfit(prof,s,phi,[nmax]) Wrote C/API module "fftfit" to file "/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64- 2.7/fftfitmodule.c" adding '/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7/fortranobject.c' to sources. adding '/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7' to include_dirs. copying /Library/Python/2.7/site-packages/numpy-1.8.2-py2.7-macosx-10.9- intel.egg/numpy/f2py/src/fortranobject.c -> /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7 copying /Library/Python/2.7/site-packages/numpy-1.8.2-py2.7-macosx-10.9-intel.egg/numpy/f2py/src/fortranobject.h -> /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7 build_src: building npy-pkg config files running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize Gnu95FCompiler Found executable /usr/local/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using build_ext building 'fftfit' extension compiling C sources C compiler: /usr/bin/clang -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -arch ppc -arch i386 -arch x86_64 -g -O2 creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h00 00gp/T creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h00 00gp/T/tmp9MmLz8 creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h00 00gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7 compile options: '-I/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9- x86_64-2.7 -I/Library/Python/2.7/site-packages/numpy-1.8.2-py2.7-macosx-10.9- intel.egg/numpy/core/include - I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c' clang: /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64- 2.7/fftfitmodule.c In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from 
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:19: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/ 5.1/include/limits.h:38: In file included from /usr/include/limits.h:63: /usr/include/sys/cdefs.h:658:2: error: Unsupported architecture #error Unsupported architecture ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:19: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/ 5.1/include/limits.h:38: In file included from /usr/include/limits.h:64: /usr/include/machine/limits.h:8:2: error: architecture not supported #error architecture not supported ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:67: In file included from /usr/include/_types.h:27: In file included from /usr/include/sys/_types.h:33: /usr/include/machine/_types.h:34:2: error: architecture not supported #error architecture not supported ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:67: In file included from /usr/include/_types.h:27: /usr/include/sys/_types.h:94:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_blkcnt_t; /* total blocks */ ^ /usr/include/sys/_types.h:95:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_blksize_t; /* preferred block size */ ^ /usr/include/sys/_types.h:96:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_dev_t; /* dev_t */ ^ /usr/include/sys/_types.h:99:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */ ^ /usr/include/sys/_types.h:100:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/ ^ /usr/include/sys/_types.h:101:9: error: unknown type name '__uint64_t' typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */ ^ /usr/include/sys/_types.h:107:9: error: unknown type name '__darwin_natural_t' typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */ ^ /usr/include/sys/_types.h:109:9: error: unknown type name '__uint16_t' typedef __uint16_t __darwin_mode_t; /* [???] Some file attributes */ ^ /usr/include/sys/_types.h:110:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */ ^ /usr/include/sys/_types.h:111:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_pid_t; /* [???] process and group IDs */ ^ /usr/include/sys/_types.h:131:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_sigset_t; /* [???] signal set */ ^ /usr/include/sys/_types.h:132:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_suseconds_t; /* [???] microseconds */ ^ /usr/include/sys/_types.h:133:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_uid_t; /* [???] 
user IDs */ ^ /usr/include/sys/_types.h:134:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_useconds_t; /* [???] microseconds */ ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:71: /usr/include/sys/_types/_va_list.h:31:9: error: unknown type name '__darwin_va_list'; did you mean '__builtin_va_list'? typedef __darwin_va_list va_list; ^ note: '__builtin_va_list' declared here In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:72: /usr/include/sys/_types/_size_t.h:30:9: error: unknown type name '__darwin_size_t'; did you mean '__darwin_ino_t'? typedef __darwin_size_t size_t; ^ /usr/include/sys/_types.h:103:26: note: '__darwin_ino_t' declared here typedef __darwin_ino64_t __darwin_ino_t; /* [???] Used for inodes */ ^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:19: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/5.1/include/limits.h:38: In file included from /usr/include/limits.h:63: /usr/include/sys/cdefs.h:658:2: error: Unsupported architecture #error Unsupported architecture ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:19: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/ 5.1/include/limits.h:38: In file included from /usr/include/limits.h:64: /usr/include/machine/limits.h:8:2: error: architecture not supported #error architecture not supported ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:67: In file included from /usr/include/_types.h:27: In file included from /usr/include/sys/_types.h:33: /usr/include/machine/_types.h:34:2: error: architecture not supported #error architecture not supported ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:67: In file included from /usr/include/_types.h:27: /usr/include/sys/_types.h:94:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_blkcnt_t; /* total blocks */ ^ /usr/include/sys/_types.h:95:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_blksize_t; /* preferred block size */ ^ /usr/include/sys/_types.h:96:9: error: unknown type name '__int32_t' typedef 
__int32_t __darwin_dev_t; /* dev_t */ ^ /usr/include/sys/_types.h:99:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */ ^ /usr/include/sys/_types.h:100:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/ ^ /usr/include/sys/_types.h:101:9: error: unknown type name '__uint64_t' typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */ ^ /usr/include/sys/_types.h:107:9: error: unknown type name '__darwin_natural_t' typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */ ^ /usr/include/sys/_types.h:109:9: error: unknown type name '__uint16_t' typedef __uint16_t __darwin_mode_t; /* [???] Some file attributes */ ^ /usr/include/sys/_types.h:110:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */ ^ /usr/include/sys/_types.h:111:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_pid_t; /* [???] process and group IDs */ ^ /usr/include/sys/_types.h:131:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_sigset_t; /* [???] signal set */ ^ /usr/include/sys/_types.h:132:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_suseconds_t; /* [???] microseconds */ ^ /usr/include/sys/_types.h:133:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_uid_t; /* [???] user IDs */ ^ /usr/include/sys/_types.h:134:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_useconds_t; /* [???] microseconds */ ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:71: /usr/include/sys/_types/_va_list.h:31:9: error: unknown type name '__darwin_va_list'; did you mean '__builtin_va_list'? typedef __darwin_va_list va_list; ^ note: '__builtin_va_list' declared here In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:72: /usr/include/sys/_types/_size_t.h:30:9: error: unknown type name '__darwin_size_t'; did you mean '__darwin_ino_t'? typedef __darwin_size_t size_t; ^ /usr/include/sys/_types.h:103:26: note: '__darwin_ino_t' declared here typedef __darwin_ino64_t __darwin_ino_t; /* [???] Used for inodes */ ^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. error: Command "/usr/bin/clang -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -arch ppc -arch i386 -arch x86_64 -g -O2 -I/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7 -I/Library/Python/2.7/site-packages/numpy-1.8.2-py2.7-macosx-10.9- intel.egg/numpy/core/include - I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7/fftfitmodule.c -o /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h00 00gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7/fftfitmodule.o" failed with exit status 1 Makefile:5: recipe for target 'fftfit' failed gmake: *** [fftfit] Error 1 How can I solve this architecture problem?

    Read the article

  • SSH error 114 when connect with FinalBuilder 7

    - by mamcx
    I'm testing FB 7 and try to connect to my Mac OS X Snow Leopard machine. I can connect with paramiko (python SSH library) but not FB7. The only thing I get is: SSH error encoutered: 114 I try stopping & restarting the share session on Mac OS X. update: I enable server debug and get this log: debug1: sshd version OpenSSH_5.2p1 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: rexec_argv[0]='/usr/sbin/sshd' debug1: rexec_argv[1]='-Dd' debug1: Bind to port 22 on ::. Server listening on :: port 22. debug1: Bind to port 22 on 0.0.0.0. Server listening on 0.0.0.0 port 22. debug1: fd 5 clearing O_NONBLOCK debug1: Server will not fork when running in debugging mode. debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8 debug1: inetd sockets after dupping: 3, 3 Connection from 10.3.7.135 port 49457 debug1: Client protocol version 2.0; client software version SecureBlackbox.8 debug1: no match: SecureBlackbox.8 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: privsep_preauth: successfully loaded Seatbelt profile for unprivileged child debug1: permanently_set_uid: 75/75 debug1: list_hostkey_types: ssh-rsa,ssh-dss debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: client->server aes128-ctr [email protected] none debug1: kex: server->client aes128-ctr [email protected] none debug1: expecting SSH2_MSG_KEXDH_INIT debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user mamcx service ssh-connection method none debug1: attempt 0 failures 0 debug1: PAM: initializing for "mamcx" Connection closed by 10.3.7.135 debug1: do_cleanup debug1: PAM: setting PAM_RHOST to "10.3.7.135" debug1: do_cleanup debug1: PAM: cleanup debug1: audit_event: unhandled event 12

    Read the article

  • Error In centos6 while compiling java classes,in tomcat6

    - by AJIT RANA
    I am newbie to Linux and Centos6. I bought server just now and want to deploy my web app in it. I am getting error while I am compiling my servlet classes. It showing me bash: javac: command not found when I try to compile my classes. But when I checked my class in '/usr/lib/jvm/java-1.6.0/bin .'I found my javac there. Then I checked javac with the help of command ./javac i got ERROR.. [root:ip_address.com]# ./javac There is insufficient memory for the Java Runtime Environment to continue. pthread_getattr_np Error occurred during initialization of VM java.lang.OutOfMemoryError: unable to create new native thread I followed the step as shown in "" Java outofmemoryerror when creating <100 threads "" which shows me command to get limits.. '[root:ipaddress.com]# ulimit -a core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 278528 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 1024 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited' [root:ipaddress.com]# top bash: top: command not found link :- http://stackoverflow.com/q/12913857/1746764

    Read the article

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files around between approximately 10 Mac OS X systems. However, we already have an S3 account and everyone on the lowest DropBox plan of $10/mo seems too expensive. We'd like to avoid any kind of local storage (share a disk on a desktop or something) since we're a geographically distributed team). So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want but some of the tools don't seem to fit the requirements: Transport security via SSL to the bucket Encryption of bucket contents Bi-directional syncing Most of the scripts I can find on the internet use "duplicity" which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't state but the protocol looks plain old http http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ) Many scripts use gpg to encrypt files. This seems like it could work, however I have to make sure that each OSX client is able to use the same key to encrypt and decrypt files (key management is left to me to manage). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository" they fail this one. Whew. So, I'd love a single tool that does this but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfill my 3 requirements, after that I can stitch together the rest. Any thoughts? THANKS!

    Read the article

  • Appcrash and possible malware

    - by Chris Lively
    First off, I'm running MS Intune Endpoint Protection. It is completely up to date. On 10/25 @ 11:53PM I came across a site that caused Intune to freak out: Microsoft Antimalware has detected malware or other potentially unwanted software. For more information please see the following: http://go.microsoft.com/fwlink/?linkid=37020&name=Trojan:Win64/Sirefef.B&threatid=2147646729 Name: Trojan:Win64/Sirefef.B ID: 2147646729 Severity: Severe Category: Trojan Path: file:_C:\Windows\System32\consrv.dll Detection Origin: Local machine Detection Type: Concrete Detection Source: Real-Time Protection User: NT AUTHORITY\SYSTEM Process Name: C:\Windows\explorer.exe Signature Version: AV: 1.115.526.0, AS: 1.115.526.0, NIS: 10.7.0.0 Engine Version: AM: 1.1.7801.0, NIS: 2.0.7707.0 I, of course, elected to simply delete the file. Since then my machine has been randomly giving an error about "Host Process for Windows Services" stopped working. There are generally two different pieces of info: Description Faulting Application Path: C:\Windows\System32\svchost.exe Problem signature Problem Event Name: BEX64 Application Name: svchost.exe Application Version: 6.1.7600.16385 Application Timestamp: 4a5bc3c1 Fault Module Name: StackHash_52d4 Fault Module Version: 0.0.0.0 Fault Module Timestamp: 00000000 Exception Offset: 000062bdabe00000 Exception Code: c0000005 Exception Data: 0000000000000008 OS Version: 6.1.7601.2.1.0.256.27 Locale ID: 1033 Additional Information 1: 52d4 Additional Information 2: 52d47b8b925663f9d6437d7892cdf21b Additional Information 3: ed24 Additional Information 4: ed24528f3b69e8539b5c5c2158896d3e and Description Faulting Application Path: C:\Windows\System32\svchost.exe Problem signature Problem Event Name: APPCRASH Application Name: svchost.exe Application Version: 6.1.7600.16385 Application Timestamp: 4a5bc3c1 Fault Module Name: mshtml.dll Fault Module Version: 9.0.8112.16437 Fault Module Timestamp: 4e5f1784 Exception Code: c0000005 Exception Offset: 00000000002ed3c2 OS Version: 6.1.7601.2.1.0.256.27 Locale ID: 1033 Additional Information 1: 3e9e Additional Information 2: 3e9e8b83f6a5f2a25451516023078a83 Additional Information 3: 432a Additional Information 4: 432a0284c502cce3bbb92a3bd555fe65 Intune claims the machine is clean. I've also tried some of the online scanners like trendmicro, all of which claimed the system is clean. Finally, I tried the "sfc /scannow" and it said all was good. I left my machine on after I left last night and there were about 50 of those messages. Ideas on how to proceed?

    Read the article

  • Windows 7 Backup not backing up custom library?

    - by James McMahon
    I have created a custom Library under Windows 7 64bit professional to handle my source code. When I tried Windows Backup and Restore for the first time I get the following error Backup encountered a problem while backing up file C:\Windows\System32\config\systemprofile\Source. Error:(The system cannot find the file specified. (0x80070002)) I've found a thread on the error on the Microsoft answers site. But it appears to be 404 (there is a version in Google's Cache) and the thread starter never gets an answer to his issue that works. The official Microsoft answer on this is This problem is due to one or more profiles under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WindowsNT\CurrentVersion\ProfileList with missing ProfileImagePath. To check whether you have missing profiles: Open regedit, navigate to the above registry key. (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList). Expand the list Click on each of the profiles listed. The first 3 profiles should have ProfileImagePath value of %SystemRoot%\System32\Config\SystemProfile, %SystemRoot%\ServiceProfiles\LocalService, and %SystemRoot%\ServiceProfiles\NetworkService respectively. Starting from the 4th profile, the ProfileImagePath should contain path to the user profiles on your machine, such as C:\users\Christine If one or more of the profile has no profile image, then you have missing profiles. To work around this, delete the profile in question (Caution: The registry contains critical settings that are necessary for your system to function properly. Take extra caution while making changes) First, export the ProfileList key for safekeeping. (Right click on the key, choose “Export”, and save it to the desktop.) Right click on the profile in question, choose delete. Try backup again. This does not work for me. Anyone have any idea what is going on here?

    Read the article

  • swapping or trashing with vast amounts of unmapped pagecache

    - by Marco
    I'm using kubuntu jaunty (i386 32bit), kernel 2.6.28-13-generic. I've 4Gb of RAM, of which only 3317Mb are seen by the system (I guess because of the 32bit system). I'm seeing that the pagecache utilization is continually growing, up to the point that the system is unusable (after a few days). This happens also when I don't do anything (all user applications closed and the bare minimum of services enabled). If enabled, the system starts to use swap space (using it all in the end). Even if swap is disabled, disk activity becomes continuous, with the system unresponsive. For example, right now the system is working (albeit a tad slow), with only Firefox and wing ide running, and I have 2Gb cached with only 45Mb mapped: $ free total used free shared buffers cached Mem: 3346388 3247328 99060 0 8416 2117980 -/+ buffers/cache: 1120932 2225456 Swap: 2144668 519448 1625220 $ cat /proc/meminfo MemTotal: 3346388 kB MemFree: 97128 kB Buffers: 7872 kB Cached: 2120224 kB SwapCached: 413860 kB Active: 2304596 kB Inactive: 865984 kB Active(anon): 2279168 kB Inactive(anon): 830236 kB Active(file): 25428 kB Inactive(file): 35748 kB Unevictable: 32 kB Mlocked: 32 kB HighTotal: 2492940 kB HighFree: 5456 kB LowTotal: 853448 kB LowFree: 91672 kB SwapTotal: 2144668 kB SwapFree: 1625244 kB Dirty: 84 kB Writeback: 0 kB AnonPages: 629304 kB Mapped: 45768 kB Slab: 45600 kB SReclaimable: 21756 kB SUnreclaim: 23844 kB PageTables: 4468 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 3817860 kB Committed_AS: 3735020 kB VmallocTotal: 122880 kB VmallocUsed: 9352 kB VmallocChunk: 66600 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 4096 kB DirectMap4k: 16376 kB DirectMap4M: 888832 kB If I try to drop the caches, little happens: # sync ; echo 3 > /proc/sys/vm/drop_caches ; free total used free shared buffers cached Mem: 3346388 3220580 125808 0 3020 2100600 -/+ buffers/cache: 1116960 2229428 Swap: 2144668 519356 1625312 Right now I've vm.swappiness = 5, but I've tried also with 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said the pagecache eats all memory even with swapping turned off. What is happening? How to avoid this?

    Read the article

  • Screen scraping software that will traverse pages

    - by nilbus
    We're creating a mashup site that pulls information from many sources all over the web. Many of these sites don't provide RSS feeds or APIs to access the information they provide. This leaves us with screen scraping as our method for collecting the data. There are many scripting tools out there written in different scripting languages for screen scraping that require you to write scraping scripts in the language the scraper was written in. Scrapy, scrAPI, and scrubyt are a few written in Ruby and Python. There are other web-based tools I've seen like Dapper that create XML or RSS feeds based on a webpage. It has a beautiful web-based interface that requires no scripting skills to use. This would be a great tool, if it were able to traverse multiple pages to gather data from hundreds pages of results. We need something that will scrape information from paginated web sites, much like scrubyt, but with a user interface that a non-programmer could use. We'll script up our own solution if we need to, probably using scrubyt, but if there's a better solution out there, we want to use it. Does anything like this exist?

    Read the article

< Previous Page | 521 522 523 524 525 526 527 528 529 530 531 532  | Next Page >