Search Results

Search found 23545 results on 942 pages for 'task parallel library'.

Page 14/942 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Move some iTunes library items to different drive?

    - by Sören Kuklau
    My internal hard drive is somewhat small, and I only regularly listen to a fraction of my iTunes library anyway, so I'd like to keep large portions of it on an external drive for archival purposes. Since dealing with multiple iTunes libraries is somewhat painful, the solution I'm looking for is to move individual items of the library to a different location without compromising the "Keep organized" and "Copy files" settings. I found an AppleScript that I assume is supposed to do this, Move Files To Folder…, but it instead copies them and doesn't update the library accordingly. I can do this manually by moving the file and then accessing it in iTunes; it'll prompt me for the new location. I just don't intend to do this one by one for thousands of files.

    Read the article

  • What is the standard place for static library files on Unix/Ubuntu

    - by Max
    Hi, I am trying to install a library manually; really I just want to put it in a sensible location, preferably somewhere on my LIB path. I have a lib[...].a file and a bunch of headers pertaining to that static library. If I look under /usr/lib/ I see only .so files, and likewise for /lib/, /lib32/, etc. I figure I could chuck it in there, but is there any place where it can get cozy with other .a files, or is that as good a place as any? I'm not a library expert, and I'm pretty sure it won't matter functionally, but I'd like to learn the conventional best practice. Also, where is the standard place to put the headers? Thanks!
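    For what it's worth, a minimal sketch of the conventional layout for a manually installed static library. The /usr/local prefix is the usual convention for software installed outside the package manager, not something stated in the question, and libfoo.a / foo.h are placeholder names:

        sudo install -m 644 libfoo.a /usr/local/lib/
        sudo install -m 644 foo.h /usr/local/include/
        # point the compiler at it explicitly in case /usr/local is not searched by default
        gcc main.c -I/usr/local/include -L/usr/local/lib -lfoo -o main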

    Read the article

  • foobar2000 truncate media library

    - by MartinM
    Hi, I want to start using the media library feature of foobar2000 v1.0 and have added one music folder, which contains a couple of subfolders with music from different albums. I restricted the file types to *.mp3;*.flac. Now my problem is that I don't want all albums/folders in my media library, but I can't seem to find an option to remove a particular album from the library. Do you have an idea what I can do? I thought about adding just the appropriate folders, but that would take ages because one can only add one folder at a time :( Thanks -Martin

    Read the article

  • parallel computation for an Iterator of elements in Java

    - by Brian Harris
    I've had the same need a few times now and wanted to get other thoughts on the right way to structure a solution. The need is to perform some operation on many elements on many threads without needing to have all elements in memory at once, just the ones under computation. As in, Iterables.partition is insufficient because it brings all elements into memory up front. Expressing it in code, I want to write a BulkCalc2 that does the same thing as BulkCalc1, just in parallel. Below is sample code that illustrates my best attempt. I'm not satisfied because it's big and ugly, but it does seem to accomplish my goals of keeping threads highly utilized until the work is done, propagating any exceptions during computation, and not having more than numThreads instances of BigThing necessarily in memory at once. I'll accept the answer which meets the stated goals in the most concise way, whether it's a way to improve my BulkCalc2 or a completely different solution.

        interface BigThing {
            int getId();
            String getString();
        }

        class Calc {
            // somewhat expensive computation
            double calc(BigThing bigThing) {
                Random r = new Random(bigThing.getString().hashCode());
                double d = 0;
                for (int i = 0; i < 100000; i++) {
                    d += r.nextDouble();
                }
                return d;
            }
        }

        class BulkCalc1 {
            final Calc calc;

            public BulkCalc1(Calc calc) {
                this.calc = calc;
            }

            public TreeMap<Integer, Double> calc(Iterator<BigThing> in) {
                TreeMap<Integer, Double> results = Maps.newTreeMap();
                while (in.hasNext()) {
                    BigThing o = in.next();
                    results.put(o.getId(), calc.calc(o));
                }
                return results;
            }
        }

        class SafeIterator<T> {
            final Iterator<T> in;

            SafeIterator(Iterator<T> in) {
                this.in = in;
            }

            synchronized T nextOrNull() {
                if (in.hasNext()) {
                    return in.next();
                }
                return null;
            }
        }

        class BulkCalc2 {
            final Calc calc;
            final int numThreads;

            public BulkCalc2(Calc calc, int numThreads) {
                this.calc = calc;
                this.numThreads = numThreads;
            }

            public TreeMap<Integer, Double> calc(Iterator<BigThing> in) {
                ExecutorService e = Executors.newFixedThreadPool(numThreads);
                List<Future<?>> futures = Lists.newLinkedList();
                final Map<Integer, Double> results = new MapMaker().concurrencyLevel(numThreads).makeMap();
                final SafeIterator<BigThing> it = new SafeIterator<BigThing>(in);
                for (int i = 0; i < numThreads; i++) {
                    futures.add(e.submit(new Runnable() {
                        @Override
                        public void run() {
                            while (true) {
                                BigThing o = it.nextOrNull();
                                if (o == null) {
                                    return;
                                }
                                results.put(o.getId(), calc.calc(o));
                            }
                        }
                    }));
                }
                e.shutdown();
                for (Future<?> future : futures) {
                    try {
                        future.get();
                    } catch (InterruptedException ex) {
                        // swallowing is OK
                    } catch (ExecutionException ex) {
                        throw Throwables.propagate(ex.getCause());
                    }
                }
                return new TreeMap<Integer, Double>(results);
            }
        }

    Read the article

  • Call HttpWebRequest on a thread other than the UI thread with the Task class - how to avoid disposal of objects created in the Task scope

    - by John
    I would like to call HttpWebRequest on a thread other than the UI thread, because I must make about 200 requests to the server and download images. My scenario is that I make a request to the server, create an image and return the image, and I do this on another thread. I use the Task class, but it seems to call the Dispose method automatically on all objects created in the task scope, so I return a null object from this method.

        public BitmapImage CreateAvatar(Uri imageUri, int sex)
        {
            if (imageUri == null)
                return CreateDefaultAvatar(sex);

            BitmapImage image = null;
            new Task(() =>
            {
                var request = WebRequest.Create(imageUri);
                var response = request.GetResponse();
                using (var stream = response.GetResponseStream())
                {
                    Byte[] buffer = new Byte[response.ContentLength];
                    int offset = 0, actuallyRead = 0;
                    do
                    {
                        actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += actuallyRead;
                    } while (actuallyRead > 0);

                    image = new BitmapImage
                    {
                        CreateOptions = BitmapCreateOptions.None,
                        CacheOption = BitmapCacheOption.OnLoad
                    };
                    image.BeginInit();
                    image.StreamSource = new MemoryStream(buffer);
                    image.EndInit();
                    image.Freeze();
                }
            }).Start();
            return image;
        }

    How can I avoid this? Thank you. Mr. Jon Skeet, I tried this:

        private Stream GetImageStream(Uri imageUri)
        {
            Byte[] buffer = null;
            //new Task(() =>
            //{
            var request = WebRequest.Create(imageUri);
            var response = request.GetResponse();
            using (var stream = response.GetResponseStream())
            {
                buffer = new Byte[response.ContentLength];
                int offset = 0, actuallyRead = 0;
                do
                {
                    actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                    offset += actuallyRead;
                } while (actuallyRead > 0);
            }
            //}).Start();
            return new MemoryStream(buffer);
        }

    It returns an object which is null, and then I tried this:

        private Stream GetImageStream(Uri imageUri)
        {
            Byte[] buffer = null;
            new Task(() =>
            {
                var request = WebRequest.Create(imageUri);
                var response = request.GetResponse();
                using (var stream = response.GetResponseStream())
                {
                    buffer = new Byte[response.ContentLength];
                    int offset = 0, actuallyRead = 0;
                    do
                    {
                        actuallyRead = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += actuallyRead;
                    } while (actuallyRead > 0);
                }
            }).Start();
            return new MemoryStream(buffer);
        }

    The method above returns null.
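    For reference, a minimal sketch of one way to restructure this so the downloaded data is actually available to the caller: return a Task<Stream> from the background work and consume it through Result or a continuation, instead of returning the outer variable before the task has finished. GetImageStreamAsync and CreateBitmap are illustrative names, not from the question above:

        private Task<Stream> GetImageStreamAsync(Uri imageUri)
        {
            // The lambda's return value becomes the task's Result once the download finishes.
            return Task.Factory.StartNew<Stream>(() =>
            {
                var request = WebRequest.Create(imageUri);
                using (var response = request.GetResponse())
                using (var stream = response.GetResponseStream())
                {
                    var buffer = new byte[response.ContentLength];
                    int offset = 0, read;
                    do
                    {
                        read = stream.Read(buffer, offset, buffer.Length - offset);
                        offset += read;
                    } while (read > 0);
                    return new MemoryStream(buffer);
                }
            });
        }

        // Caller: build the BitmapImage only when the data is ready, back on the UI thread.
        // GetImageStreamAsync(uri).ContinueWith(t => CreateBitmap(t.Result),
        //     TaskScheduler.FromCurrentSynchronizationContext());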

    Read the article

  • Problem with creation of scheduled task from IIS6 on SR2003

    - by Morten Louw Nielsen
    Hi, I have also posted this question on Stack Overflow, but will also try here, since it might be more system-related. I am writing a web application using .NET. The web app creates scheduled tasks using the System.Diagnostics.Process class, calling SCHTASKS.EXE with parameters. I have changed the identity on the app pool to a specific domain user. The domain user is a local administrator on all four web servers. From webserver01 I am creating tasks on webserver01 to webserver04. It works perfectly for 3-5 days, but then it breaks. It gives me the following error message in a message box: "The application failed to initialize properly (0xc0000142). Click on OK to terminate the application." If the system is in the broken state and I change the identity of the app pool to Domain Administrator, it works. As I change it back to my domain user, it breaks again. If I reboot the server, it works again for the same number of days, but will break again. It seems like a permission-related problem; I just don't understand why it works sometimes and sometimes doesn't. I hope someone out there has seen this problem! Looking forward to hearing from you! Kind regards, Morten, Denmark
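    For context, a stripped-down sketch of the pattern described above (launching SCHTASKS.EXE from the app pool identity via System.Diagnostics.Process). All parameter values are placeholders; redirecting the output is optional but makes SCHTASKS' own messages easier to log from a web app:

        var psi = new System.Diagnostics.ProcessStartInfo("SCHTASKS.EXE",
            "/Create /S webserver02 /RU MYDOMAIN\\taskuser /RP secret /SC DAILY /ST 02:00 " +
            "/TN NightlyJob /TR \"D:\\jobs\\run.cmd\"")
        {
            UseShellExecute = false,   // run without a shell or window
            CreateNoWindow = true,
            RedirectStandardOutput = true,
            RedirectStandardError = true
        };
        using (var p = System.Diagnostics.Process.Start(psi))
        {
            string output = p.StandardOutput.ReadToEnd();
            string errors = p.StandardError.ReadToEnd();
            p.WaitForExit();
            // log output and errors here
        }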

    Read the article

  • scheduled task share permissions

    - by Enriquev
    Hello, I would like to know if there is a way I can share \\server\Scheduled Tasks on Server 2003 with normal users, because as far as I can tell only administrators can see this share. Is there any way I can change this share's permissions and add users or groups? I know I can change permissions on the jobs themselves, but normal users don't see the folder at all, so they can't access the jobs... Thank you.

    Read the article

  • Cron - run task every 90 min.

    - by Cory J
    Trying to adjust a cron job to run every 90 min. It was previously running every 20 min, which was a simple cron job:

        */20 * * * * whatever

    To change it to every 90, it seems like I need to split it into 2 jobs. I've done this:

        0 0,3,6,9 * * * whatever
        30 1,4,7,10 * * * whatever

    Is this right? The job doesn't seem to kick off.
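    For comparison, a sketch of one way to keep the 90-minute cadence across the whole day; the two entries above only list hours up to 10:30, so that schedule stops mid-morning. Step values over an hour range (assuming a standard Vixie cron) look like this:

        0 0-21/3 * * * whatever
        30 1-22/3 * * * whatever

    The first line fires at 0:00, 3:00, ... 21:00 and the second at 1:30, 4:30, ... 22:30, which works out to exactly every 90 minutes.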

    Read the article

  • Group Policy Task Schedule deployed to User Configuration not working, works when in Computer Configuration?

    - by user80130
    I added a Scheduled Task on my Windows 2008 R2 domain controller in the Group Policy Manager: MyDomain Policy > User Configuration > Preferences > Control Panel Settings > Scheduled Tasks, a basic task (like starting Notepad) when the user unlocks his workstation. This should show up in the client workstation's Task Scheduler, but it doesn't. No errors or anything like that. If I use "Computer Configuration" instead of "User Configuration", the task appears and I'm able to run it. I've tried gpupdate /force followed by gpresult and checked the report, but it doesn't contain the GPO Scheduled Task I created (again, it does show up when using "Computer Configuration"). The issue is that I have to run the application in the current user's context, and only on a specific Employee OU, thereby limiting this task to employee workstations and not applying it when the same employee logs on to internal servers and such. The primary domain controller is Windows 2008 R2, the workstations Windows 7 Enterprise. What am I doing wrong?

    Read the article

  • System 67 error scheduled task to transfer files

    - by grom
    Running directly from the command line, the batch script works. But when it is scheduled to run (Windows 2003 R2 server) as the local administrator, I get the following error:

        D:\ScadaExport\exported>ping 192.168.10.78

        Pinging 192.168.10.78 with 32 bytes of data:
        Reply from 192.168.10.78: bytes=32 time=11ms TTL=61
        Reply from 192.168.10.78: bytes=32 time=15ms TTL=61
        Reply from 192.168.10.78: bytes=32 time=29ms TTL=61
        Reply from 192.168.10.78: bytes=32 time=10ms TTL=61

        Ping statistics for 192.168.10.78:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 10ms, Maximum = 29ms, Average = 16ms

        D:\ScadaExport\exported>net use Z: \\192.168.10.78\bar-pccommon\scada\
        System error 67 has occurred.

        The network name cannot be found.

    Any ideas? Google is turning up nothing useful; I just keep finding results relating to DNS etc., but I am using the IP address here.
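    Two things that may be worth ruling out, offered as guesses rather than a confirmed diagnosis: error 67 from net use can come from the path form rather than from name resolution, so it may help to map the share root without the trailing backslash, and to pass credentials explicitly since the task runs as a local rather than domain account (user name and password below are placeholders):

        net use Z: \\192.168.10.78\bar-pccommon /user:192.168.10.78\someuser somepassword /persistent:no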

    Read the article

  • Windows scheduler - Tasks not running when user not logged in

    - by Glinkot
    I have Windows Server 2003, with schedules set up via Remote Desktop under one account. That account appears in the 'creator' column too. I have 'Run only if logged on' unticked. When I have logged in under that account and then 'disconnected', leaving the session alive, the schedule runs. But every time the server is rebooted, the task again fails to run until I log in and disconnect once more. Any KB fixes I've missed or issues I've overlooked? Normally I only discover the issue when a user tells me the schedule has stopped running, so it's a real reliability issue. I'd also be happy with an answer suggesting an alternative scheduler with higher reliability. Thanks
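    One low-effort check, offered as a suggestion rather than a known fix: on Server 2003 a task that should run without a logged-on session needs valid stored credentials, and re-saving them (then checking the Last Result column in Scheduled Tasks after the next reboot) sometimes clears this kind of failure. Task name and account below are placeholders:

        schtasks /Change /TN "MyNightlyJob" /RU MYDOMAIN\svc_account /RP NewPassword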

    Read the article

  • Scheduled Jobs during hours of autumn time change

    - by NealWalters
    I'm wondering how other people deal with this scenario: what if you have a job scheduled to run at 1:30 am? In the autumn, when the time changes, the hour from 1:00:00 to 1:59:59 repeats itself, so that job would run twice. This could be Windows Task Scheduler, SQL Agent or any other scheduling tool; most of these tools seem to be based on machine time, not UTC time. If I told it to run the job at a UTC time each night, then I wouldn't have the duplicate-hour issue.
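    As an illustration of the UTC idea (a sketch only; the 06:30 UTC value is an arbitrary stand-in for whatever local time the job is meant to approximate), computing the next run purely in UTC makes the repeated local hour invisible to the scheduler:

        // The autumn fall-back hour never repeats in UTC, so the job cannot fire twice.
        DateTime nowUtc = DateTime.UtcNow;
        DateTime nextRunUtc = nowUtc.Date.AddHours(6).AddMinutes(30);   // 06:30 UTC (placeholder)
        if (nextRunUtc <= nowUtc)
            nextRunUtc = nextRunUtc.AddDays(1);
        TimeSpan delay = nextRunUtc - nowUtc;
        // hand 'delay' to a timer, or use nextRunUtc.ToLocalTime() for display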

    Read the article

  • php startup error Invalid library (maybe not a PHP library) 'pcntl.so'

    - by And-y
    After searching for hours to solve my problem and finding nothing helpful, I am asking my first question here. I want to compile and install the PHP 5.3.17 CLI with the pcntl extension enabled on a Debian server. The installation was successful, but when I start the PHP CLI the following error is displayed:

        PHP Warning: PHP Startup: Invalid library (maybe not a PHP library) 'pcntl.so' in Unknown on line 0

    The following configure is used:

        './configure' '--prefix=/usr/share' '--datadir=/usr/share/php' '--bindir=/usr/bin' '--libdir=/usr/share' '--includedir=/usr/include' '--with-config-file-path=/etc/php5/cli' '--disable-cgi' '--enable-bcmath' '--enable-inline-optimization' '--enable-mbstring' '--enable-mbregex' '--enable-pcntl' '--enable-sigchild' '--enable-shmop' '--enable-sysvmsg' '--enable-sysvsem' '--enable-sysvshm' '--with-mysql' '--with-imap' '--with-imap-ssl' '--with-kerberos'

    In php.ini the following options are set:

        extension_dir=/usr/lib/php5/20090626/
        extension=pcntl.so

    I hope someone can help me.
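    One likely explanation, inferred from the configure line above rather than confirmed: '--enable-pcntl' compiles pcntl into the PHP binary itself, so the extension=pcntl.so line in php.ini is at best redundant and may be loading a leftover module from /usr/lib/php5/20090626/ that belongs to a different PHP build, which is what the "Invalid library" warning usually complains about. A quick check:

        php -m | grep pcntl        # if pcntl is listed, it is already built in
        # then comment out the shared module in php.ini:
        ; extension=pcntl.so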

    Read the article

  • Unable to remotely schedule tasks from the command line

    - by Eptin
    I'm on a Windows 7 machine, attempting to use the command line to schedule a task on another Windows 7 machine in my company's network. I have administrative-level credentials for both computers. With help from http://msdn.microsoft.com/en-us/library/windows/desktop/bb736357.aspx I have created this line to run on the command prompt:

        schtasks /Create /S machinename /U username /P password /SC ONCE /TN Test1 /TR C:\Windows\System32\calc.exe /ST 16:30

    Whenever I launch that, I get the following error:

        ERROR: User credentials are not allowed on the local machine.

    How can I fix this?
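    One variation that may be worth trying, based on what the switches are documented to do rather than on a verified repro: /U and /P supply the credentials used to connect to the machine named by /S and are rejected when that target is (or resolves to) the local machine, while /RU and /RP specify the account the task itself runs under. Keeping the remote /S but moving the account to /RU /RP looks like this:

        schtasks /Create /S machinename /RU username /RP password /SC ONCE /TN Test1 /TR C:\Windows\System32\calc.exe /ST 16:30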

    Read the article

  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn't easily find simple PowerShell scripts on the web to help me deploy a number of virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image I built (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest you use Windows PowerShell ISE), so I also wanted to fire the commands in parallel as soon as possible, without losing the results in the console. In order to execute multiple commands in parallel, I used the Start-Job cmdlet, and with Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique allows me to reduce execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really be executed in parallel, but only one part of their execution time is subject to this lock, so you still obtain a better response time in these scenarios (this is the case for provisioning a new VM). Finally, when you remove the VMs you still have the disks containing the virtual machines to remove. This cannot be done just after the VM removal, because you have to wait until the removal operation is completed on Azure, so I wrote a script that you run a few minutes after the VMs' removal to delete the disks (and VHDs) no longer related to a VM. I just check that the disks were associated with the original image name used to provision the VMs (so I don't remove other disks deployed by other batches that I might want to preserve). These examples are specific to my scenario; if you need more complex configurations you will have to change and adapt the code, but if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts:

    - ProvisionVMs: provisions many VMs in parallel starting from the same image, creating one service for each VM.
    - RemoveVMs: removes all the VMs in parallel; it also removes the service created for each VM.
    - StartVMs: starts all the VMs in parallel.
    - StopVMs: stops all the VMs in parallel.
    - RemoveOrphanDisks: removes all the disks no longer used by any VM. Run this script a few minutes after the RemoveVMs script.
    ProvisionVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        # Name of storage account (where VMs will be deployed)
        $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

        function ProvisionVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                $Location = "Copy the Location property you get from Get-AzureStorageAccount"
                $InstanceSize = "A5" # You can use any other instance, such as Large, A6, and so on
                $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
                $Password = "Password"      # Write the password of the administrator account in the new VM
                $Image = "Copy the ImageName property you get from Get-AzureVMImage"
                # You can list your own images using the following command:
                # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }
                New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
                    Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
                    New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
            }
        }

        # Set the proper storage - you might remove this line if you have only one storage in the subscription
        Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list provisions one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        ProvisionVM "test10"
        ProvisionVM "test11"
        ProvisionVM "test12"
        ProvisionVM "test13"
        ProvisionVM "test14"
        ProvisionVM "test15"
        ProvisionVM "test16"
        ProvisionVM "test17"
        ProvisionVM "test18"
        ProvisionVM "test19"
        ProvisionVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup of jobs
        Remove-Job *

        # Displays batch completed
        echo "Provisioning VM Completed"

    RemoveVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function RemoveVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Remove-AzureService -ServiceName $VmName -Force -Verbose
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list removes one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        RemoveVM "test10"
        RemoveVM "test11"
        RemoveVM "test12"
        RemoveVM "test13"
        RemoveVM "test14"
        RemoveVM "test15"
        RemoveVM "test16"
        RemoveVM "test17"
        RemoveVM "test18"
        RemoveVM "test19"
        RemoveVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Remove VM Completed"

    StartVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function StartVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list starts one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        StartVM "test10"
        StartVM "test11"
        StartVM "test12"
        StartVM "test13"
        StartVM "test14"
        StartVM "test15"
        StartVM "test16"
        StartVM "test17"
        StartVM "test18"
        StartVM "test19"
        StartVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Start VM Completed"

    StopVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function StopVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list stops one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        StopVM "test10"
        StopVM "test11"
        StopVM "test12"
        StopVM "test13"
        StopVM "test14"
        StopVM "test15"
        StopVM "test16"
        StopVM "test17"
        StopVM "test18"
        StopVM "test19"
        StopVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Stop VM Completed"

    RemoveOrphanDisks

        $ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
        # You can list your own images using the following command:
        # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

        # Remove all orphan disks coming from the image specified in $ImageName
        Get-AzureDisk |
            Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
            Remove-AzureDisk -DeleteVHD -Verbose

    Read the article

  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspx

    I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum:

        using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
            if( null == _ColorFrame ) return;

            BitmapToDisplay.Lock();
            _ColorFrame.CopyConvertedFrameDataToIntPtr(
                BitmapToDisplay.BackBuffer,
                Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ),
                ColorImageFormat.Bgra );
            BitmapToDisplay.AddDirtyRect(
                new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) );
            BitmapToDisplay.Unlock();
        }

    With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer along with the conversion on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case. Further digging led me to the real problem: it appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080, which produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread. I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all? The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was to get the conversion operation off the UI thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way. The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought and maintained all 8 cores up to about 97% usage. That's when I remembered that obscure option in the Task Parallel Library where you could limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks was enough for most cases. This yielded the following code:

        private byte ClipToByte( int p_ValueToClip ) {
            return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue :
                ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
        }

        private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) {
            if( null == e.FrameReference ) return;

            // If you do not dispose of the frame, you never get another one...
            using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
                if( null == _ColorFrame ) return;

                byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
                byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
                _ColorFrame.CopyRawFrameDataToArray( _InputImage );

                Task.Factory.StartNew( () => {
                    ParallelOptions _ParallelOptions = new ParallelOptions();
                    _ParallelOptions.MaxDegreeOfParallelism = 4;

                    Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                        // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                        int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                        int _U  = _InputImage[( _Index << 2 ) + 1] - 128;
                        int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                        int _V  = _InputImage[( _Index << 2 ) + 3] - 128;

                        byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                        byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                        byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                        _OutputImage[( _Index << 3 ) + 0] = _B;
                        _OutputImage[( _Index << 3 ) + 1] = _G;
                        _OutputImage[( _Index << 3 ) + 2] = _R;
                        _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                        _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                        _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                        _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                        _OutputImage[( _Index << 3 ) + 4] = _B;
                        _OutputImage[( _Index << 3 ) + 5] = _G;
                        _OutputImage[( _Index << 3 ) + 6] = _R;
                        _OutputImage[( _Index << 3 ) + 7] = 0xFF;
                    } );

                    Application.Current.Dispatcher.Invoke( () => {
                        BitmapToDisplay.WritePixels(
                            new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                            _OutputImage, BitmapToDisplay.BackBufferStride, 0 );
                    } );
                } );
            }
        }

    This seemed to yield the results I wanted, but there was still the occasional stutter. This led to what I realized was the second problem: there is a race condition between the UI thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8 MB to the back buffer. Then I started thinking I could cheat. The Kinect is running at 30 frames per second; the WPF UI thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled for a frame, and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was OK with this for cases where silky smooth video performance REALLY mattered. This code looked like this:

        private byte ClipToByte( int p_ValueToClip ) {
            return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue :
                ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
        }

        void CompositionTarget_Rendering( object sender, EventArgs e ) {
            using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) {
                if( null == _ColorFrame ) return;

                byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
                byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
                _ColorFrame.CopyRawFrameDataToArray( _InputImage );

                ParallelOptions _ParallelOptions = new ParallelOptions();
                _ParallelOptions.MaxDegreeOfParallelism = 4;

                Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                    // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                    int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                    int _U  = _InputImage[( _Index << 2 ) + 1] - 128;
                    int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                    int _V  = _InputImage[( _Index << 2 ) + 3] - 128;

                    byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                    byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                    byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                    _OutputImage[( _Index << 3 ) + 0] = _B;
                    _OutputImage[( _Index << 3 ) + 1] = _G;
                    _OutputImage[( _Index << 3 ) + 2] = _R;
                    _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                    _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                    _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                    _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                    _OutputImage[( _Index << 3 ) + 4] = _B;
                    _OutputImage[( _Index << 3 ) + 5] = _G;
                    _OutputImage[( _Index << 3 ) + 6] = _R;
                    _OutputImage[( _Index << 3 ) + 7] = 0xFF;
                } );

                BitmapToDisplay.WritePixels(
                    new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                    _OutputImage, BitmapToDisplay.BackBufferStride, 0 );
            }
        }

    Read the article

  • Overriding MSBuildExtensionsPath in the MSBuild task is flaky

    - by Stuart Lange
    This is already cross-posted at MS Connect: https://connect.microsoft.com/VisualStudio/feedback/details/560451 I am attempting to override the property $(MSBuildExtensionsPath) when building a solution containing a C# web application project via msbuild. I am doing this because a web application csproj file imports the file "$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets". This file is installed by Visual Studio to the standard $(MSBuildExtensionsPath) location (C:\Program Files\MSBuild). I would like to eliminate the dependency on this file being installed on the machine (I would like to keep my build servers as "clean" as possible). In order to do this, I would like to include the Microsoft.WebApplication.targets in source control with my project, and then override $(MSBuildExtensionsPath) so that the csproj will import this included version of Microsoft.WebApplication.targets. This approach allows me to remove the dependency without requiring me to manually modify the web application csproj file. This scheme works fine when I build my solution file from the command line, supplying the custom value of $(MSBuildExtensionsPath) at the command line to msbuild via the /p flag. However, if I attempt to build the solution using the MSBuild task in a custom msbuild project file (overriding MSBuildExtensionsPath using the "Properties" attribute), it fails because the web app csproj file is attempting to import the Microsoft.WebApplication.targets from the "standard" Microsoft.WebApplication.targets location (C:\Program Files\MSBuild). Notably, if I run msbuild using the "Exec" task in my custom project file, it works. Even more notably, the FIRST time I run the build using the "MSBuild" task AFTER I have run the build using the "EXEC" task (or directly from the command line), the build works. Has anyone seen behavior like this before? Am I crazy? Is anyone aware of the root cause of this problem, a possible workaround, or whether this is a legitimate bug in MSBuild?
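    For reference, the kind of invocation being described; a sketch only, with the solution name and the relative path to the checked-in targets folder being placeholders rather than anything from the report above:

        <MSBuild Projects="MySolution.sln"
                 Properties="Configuration=Release;MSBuildExtensionsPath=$(MSBuildProjectDirectory)\tools\msbuild" />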

    Read the article

  • Parallelism in .NET – Part 20, Using Task with Existing APIs

    - by Reed
    Although the Task class provides a huge amount of flexibility for handling asynchronous actions, the .NET Framework still contains a large number of APIs that are based on the previous asynchronous programming model.  While Task and Task<T> provide a much nicer syntax as well as extending the flexibility, allowing features such as continuations based on multiple tasks, the existing APIs don’t directly support this workflow. There is a method in the TaskFactory class which can be used to adapt the existing APIs to the new Task class: TaskFactory.FromAsync.  This method provides a way to convert from the BeginOperation/EndOperation method pair syntax common throughout the .NET Framework directly to a Task<T> containing the results of the operation in the task’s Result property. While this method does exist, it unfortunately comes at a cost: the method overloads are far from simple to decipher, and the resulting code is not always as easily understood as newer code based directly on the Task class.  For example, a single call to handle WebRequest.BeginGetResponse/EndGetResponse, one of the easiest “pairs” of methods to use, looks like the following:

        var task = Task.Factory.FromAsync<WebResponse>(
            request.BeginGetResponse,
            request.EndGetResponse,
            null);

    The compiler is unfortunately unable to infer the correct type, and, as a result, the WebResponse must be explicitly mentioned in the method call.  As a result, I typically recommend wrapping this into an extension method to ease use.  For example, I would place the above in an extension method like:

        public static class WebRequestExtensions
        {
            public static Task<WebResponse> GetReponseAsync(this WebRequest request)
            {
                return Task.Factory.FromAsync<WebResponse>(
                    request.BeginGetResponse,
                    request.EndGetResponse,
                    null);
            }
        }

    This dramatically simplifies usage.  For example, if we wanted to asynchronously check to see if this blog supported XHTML 1.0, and report that in a text box to the user, we could do:

        var webRequest = WebRequest.Create("http://www.reedcopsey.com");
        webRequest.GetReponseAsync().ContinueWith(t =>
        {
            using (var sr = new StreamReader(t.Result.GetResponseStream()))
            {
                string str = sr.ReadLine();
                this.textBox1.Text = string.Format(
                    "Page at {0} supports XHTML 1.0: {1}",
                    t.Result.ResponseUri,
                    str.Contains("XHTML 1.0"));
            }
        }, TaskScheduler.FromCurrentSynchronizationContext());

    By using a continuation with a TaskScheduler based on the current synchronization context, we can keep this request asynchronous, check based on the first line of the response string, and report the results back on our UI directly.

    Read the article

  • Parallel Computing Features Tour in VS2010

    Just realized that I have not linked from here to a screencast I recorded a couple weeks ago that shows the API, parallel debugger and concurrency visualizer in VS2010. Take a few minutes to watch the VS2010 Parallel Computing Features Tour. Comments about this post welcome at the original blog.

    Read the article

  • Materials from Parallel Programming Pattern Presentation at Charlottesville .NET User Group Meeting

    - by John Blumenauer
    On Thursday, May 27, I had the privilege of presenting “A Look at Parallel Programming Patterns” at the Charlottesville .NET User Group’s monthly meeting.  Those folks in attendance had many great questions and were obviously very interested in what the Task Parallel Library has to offer.  The code and slides can be found HERE.  Thanks again to CHODOTNET for having me in town to speak.  If you experience any problems downloading the slides or code, please let me know.

    Read the article

  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts about how to implement them correctly using a scheduler. Just a quick recap:

    Standard behavior tree:
    - Each execution tick the tree is traversed from the root in depth-first order.
    - The execution order is implicitly expressed by the tree structure, so in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.

    Event-driven BT:
    - During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update.
    - The first traversal implicitly produces a depth-first ordered queue in the scheduler.
    - Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status) the parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler.
    - Without parallel nodes in the tree there will be up to 1 task running in the scheduler.
    - Without parallel nodes, the tasks in the queue (excluding dynamic priority implementations) will always be ordered in depth-first order (is this right?).

    Now, some requirements I think need to be guaranteed by a correct implementation (I'm not sure though) are:
    - The result of the traversal should be independent of which implementation strategy is used.
    - The traversal result must be deterministic.

    I'm struggling trying to guarantee both in the case of parallel nodes. Here's an example:

        Parallel_1
        -->Sequence_1
        ---->leaf_A
        ---->leaf_B
        -->leaf_C

    Considering a FIFO policy in the scheduler, before the leaf_A node terminates the tasks in the scheduler are:

        P1(suspended), S1(suspended), leaf_A(running), leaf_C(running)

    When leaf_A terminates, leaf_B will be scheduled (at the end of the queue), so the queue becomes:

        P1(suspended), S1(suspended), leaf_C(running), leaf_B(running)

    In this case leaf_B will be executed after leaf_C at every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C. So I have a couple of questions: do I understand correctly how event-driven BTs work? How can I guarantee that the depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?

    Read the article

< Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >