Search Results

Search found 22000 results on 880 pages for 'worker process'.

Page 249/880

  • Lose changed data in session

    - by user150528
    Our ASP.NET 2.0 application runs a very long (synchronous) process before sending the response back to the client. I observed that a second request, exactly the same as the initial one, was sent after the client (IE8) had been waiting for the response for a long time while our application was still processing the first request. When the initial request arrives, I use the page session with a predefined key to store a flag and then start the long process while the client IE waits for the response, so that if a second request comes in, our application can check the session value. After our application sets the session flag and starts processing, I use Fiddler's “Abort Session” to abort the initial request; right away the second request (same as the first one) is sent automatically, but the session value set earlier no longer seems to exist. Any thoughts?
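
    For reference, a minimal sketch of the flag pattern described above (the key name, helper method and 409 response are illustrative assumptions, not the poster's actual code; it assumes the default in-process, read-write session state):

        using System;
        using System.Threading;
        using System.Web.UI;

        public partial class ImportPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // A duplicate request arriving while the first is still running bails out
                // instead of starting the long work a second time.
                if (Session["ImportRunning"] != null)
                {
                    Response.StatusCode = 409;   // "conflict" - purely illustrative
                    Response.End();
                    return;
                }
                Session["ImportRunning"] = true;
                try
                {
                    RunLongImport();             // hypothetical long-running work
                }
                finally
                {
                    Session.Remove("ImportRunning");   // clear the flag even if the work throws
                }
            }

            private void RunLongImport()
            {
                Thread.Sleep(TimeSpan.FromMinutes(5)); // stand-in for the real processing
            }
        }

    Note also that ASP.NET serializes concurrent requests that share the same read-write session (the second request waits on the session lock), which can affect how the duplicate request behaves.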

    Read the article

  • msvcrt: memory usage goes wild, but not under debugger

    - by al_miro
    I have C++ code compiled with the Intel compiler, 32-bit, in MS VC6 mode, so it uses either msvcrt.dll or msvcrtd.dll. The process does heavy memory allocation and deallocation. I monitor the memory usage with WMI and look at VirtualSize and WorkingSetSize:
    - with the debug runtime (msvcrtd.dll): virtual constant at 1.7 GB, working set constant at 1.2 GB
    - with the non-debug runtime (msvcrt.dll): virtual rising from 1.7 to 2.1 GB, working set rising from 1.2 to 1.4 GB
    - with the non-debug runtime but under a debugger (windbg): virtual constant at 1.7 GB, working set constant
    At 2.1 GB virtual the process crashes (as expected). But why would the virtual usage increase only with the (non-debug) msvcrt.dll, and only when not under a debugger? In all cases the compilation flags are identical; only the runtime libs are different.
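
    For anyone reproducing the measurement, a small C# sketch of reading the same two WMI counters mentioned above (purely illustrative, not the poster's monitoring code; add a reference to System.Management.dll):

        using System;
        using System.Management;

        class MemWatch
        {
            static void Main(string[] args)
            {
                int pid = int.Parse(args[0]);
                var query = new ObjectQuery(
                    "SELECT VirtualSize, WorkingSetSize FROM Win32_Process WHERE ProcessId = " + pid);
                using (var searcher = new ManagementObjectSearcher(query))
                {
                    foreach (ManagementObject mo in searcher.Get())
                    {
                        // Both counters are reported in bytes by Win32_Process.
                        Console.WriteLine("Virtual: {0:N0} bytes, WorkingSet: {1:N0} bytes",
                            Convert.ToUInt64(mo["VirtualSize"]),
                            Convert.ToUInt64(mo["WorkingSetSize"]));
                    }
                }
            }
        }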

    Read the article

  • Does anyone know a better alternative to MS Excel's Solver?

    - by tundal45
    My company has to crunch a lot of data, and part of the process involves running the Solver and plotting a graph through the resulting data points. Obviously there is a lot of copy and paste involved, and the whole process is shaky, error-prone and an all-round cluster-fudge. I was wondering if there is an alternative to the Solver that can be used, so that even if we have to use Excel to plot the final graph, there will be a lot less data that needs to be copied and pasted back and forth. It would be especially great if the tool could be easily integrated into a .NET application, but I am open to suggestions that may require a little bit of code-fu to get this to work. Thanks!

    Read the article

  • How can I handle template dependencies in Template Toolkit?

    - by Smack my batch up
    My static web pages are built from a huge bunch of templates which are inter-included using Template Toolkit's "import" and "include", so page.html looks like this: [% INCLUDE top %] [% IMPORT middle %] Then top might have even more files included. I have very many of these files, and they have to be run through to create the web pages in various languages (English, French, etc., not computer languages). This is a very complicated process and when one file is updated I would like to be able to automatically remake only the necessary files, using a makefile or something similar. Are there any tools like makedepend for C files which can parse template toolkit templates and create a dependency list for use in a makefile? Or are there better ways to automate this process?

    Read the article

  • Background job with status in rails

    - by pepernik
    Hey. I would like to upload a file and then parse it. Because parsing can take up to 10 minutes, I installed the delayed_job plugin and call the parsing function through send_later. I have to mention that this is an AJAX app. Imagine that you press an AJAX button that starts the upload, and after that the source is imported into the database. During the process I want to show a progress bar or a message (importing...), and when it completes, the task status changes to done. My question is: what is the best way to check the status of the process? What would you do? My idea is to have another controller action, "status", which looks into the database and provides the right status.

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure.  Below is a short-summary of just a few of them: New Admin Portal and Command Line Tools Today’s release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way.  It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which like the portal call the REST Management APIs) to make it even easier to script and automate your administration tasks.  We are offering both a Powershell (for Windows) and Bash (for Mac and Linux) set of tools to download.  Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license. Virtual Machines Windows Azure now supports the ability to deploy and run durable VMs in the cloud.  You can easily create these VMs using a new Image Gallery built-into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them.  Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions).  Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances).  This provides you with the flexibility to run pretty much any workload within Windows Azure.   The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them.  Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives).  You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we’ve released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure.  We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment.  All you need to do is download the VHD file and boot it up locally, no import/export steps required. Web Sites Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.  
You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time. You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy.  We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering.  The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the “Set up TFS publishing” or “Set up Git publishing” links on a web-site’s dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to.  Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code.  This provides a very powerful DevOps workflow experience.  Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources).  This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a “reserved mode” that isolates them so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast, deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need. Cloud Services and Distributed Caching: Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments.  Previously we referred to this capability as “hosted services” – with this week’s release we are now referring to this capability as “cloud services”.  We are also enabling a bunch of new features with them. Distributed Cache: One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and set up a low-latency, in-memory distributed cache within your applications.  This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).
In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways: 1) Using a co-located approach.  In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache.  Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role.  The big benefit with the “co-located” option is that it is free (you don’t have to pay anything to enable it) and it allows you to use what might have been otherwise unused memory within your application VMs. 2) Alternatively, you can add “cache worker roles” to your cloud service that are used solely for caching.  These will also be joined into one large distributed cache ring that other roles within your application can access.  You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application. New SDKs and Tooling Support: We have updated all of the Windows Azure SDKs with today’s release to include new features and capabilities.  Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today’s release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system. Much, Much More: The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today’s release.  These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com.  You can also watch a live keynote I’m giving at 1pm June 7th (later today) where I’ll walk through all of the new features.  We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes.  We are really excited to see the great applications you build with them. Hope this helps, Scott
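
    As a rough illustration of the cache API referred to above (a sketch only — it assumes the caching client assemblies are referenced and that a dataCacheClients configuration section already exists, which is not shown here):

        using Microsoft.ApplicationServer.Caching;

        class CacheSketch
        {
            static void Main()
            {
                // Reads client settings from the configured dataCacheClients section.
                DataCacheFactory factory = new DataCacheFactory();
                DataCache cache = factory.GetDefaultCache();

                cache.Put("greeting", "hello from any role instance");   // write once...
                string value = (string)cache.Get("greeting");            // ...read back from any other role instance
            }
        }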

    Read the article

  • Check if key is pressed using python (a daemon in the background)

    - by Nazarius Kappertaal
    I've created a Python script in which an event needs to be executed each time I press the Super (or WinKey) key on my keyboard. How can one achieve this without the Python process being "focused" - as it is running in the background waiting for the key to be pressed to execute the event? I've seen a lot of posts around the web showing me how to read input, but they have all required one to have the process "focused", and none have shown me how to capture the Super (or WinKey) key using a Python script. I'm running Ubuntu 9.10.

    Read the article

  • Debug C# Windows Service

    - by Goober
    Scenario: I've got a Windows service written in C#. I've read all the Google threads on how to debug it, but I still can't get it to work. I've run "PathTo.NetFramework\InstallUtil.exe C:\MyService.exe". It said the install was successful; however, when I run "Services.msc", the service isn't displayed at all, anywhere. If I go into Task Manager, there is a process called "MyService.vshost.exe" - pretty sure that's not it, because it's a service, not a process. Any suggestions and/or help? Greatly appreciated. Other: I'm running VS2008.
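
    One common pattern that makes a service easy to debug (a sketch with a hypothetical MyService class, not the poster's code) is to run the same logic directly when the exe is started interactively from Visual Studio, and only hand off to the Service Control Manager otherwise:

        using System;
        using System.ServiceProcess;

        public class MyService : ServiceBase          // hypothetical service class
        {
            protected override void OnStart(string[] args)
            {
                // start the real worker here
            }

            public void StartFromConsole(string[] args)
            {
                OnStart(args);                        // lets breakpoints hit under F5
            }

            static void Main(string[] args)
            {
                MyService svc = new MyService();
                if (Environment.UserInteractive)
                {
                    svc.StartFromConsole(args);
                    Console.WriteLine("Running interactively; press Enter to stop.");
                    Console.ReadLine();
                }
                else
                {
                    ServiceBase.Run(svc);             // normal path when started by the SCM
                }
            }
        }

    (The "MyService.vshost.exe" seen in Task Manager is the Visual Studio debugger hosting process, not an installed service.)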

    Read the article

  • Sample source code for processing messages of a window created by an external program?

    - by David
    I know I have to use SetWindowLongPtr with GWLP_WNDPROC and create my own WndProc that handles the message I want (such as WM_GETMINMAXINFO, so I can modify the MINMAXINFO structure). However, because I want to do this for a window created by another program (like notepad.exe), I can't do this from my C#/WinForms program; I have to create a native C/C++ DLL and inject it into the process that created the window. Can you provide a link or sample code for doing this (the native C++ DLL, and the way to call it from C# and inject it into the external process)? Thank you

    Read the article

  • Can't use ProcessWindowStyle.Minimized to start Firefox instance minimized?

    - by Rob3C
    I'm having trouble using Process.Start with Firefox. I want to start a new instance of Firefox in a minimized window. The following works fine with Internet Explorer, notepad, etc.: ProcessStartInfo p = new ProcessStartInfo(); p.FileName = "iexplore.exe"; p.Arguments = "http://www.google.com"; p.WindowStyle = ProcessWindowStyle.Minimized; Process.Start(p); This opens IE in a new, minimized window. Good, just what I want. If I try the exact same thing but instead supply Firefox in the p.FileName, it opens Firefox in a "normal" window, rather than minimized. I've tried various changes to arguments, also have tried examining my local Firefox settings (under Tools/Options) with no luck. I'm sure I'm just missing something simple, but can't figure out what it is. If anyone can help me with getting Firefox opened in a minimized state it would be greatly appreciated!
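
    One workaround, sketched below, is to minimize the window yourself after it appears rather than relying on the requested window style (illustrative only; note that if Firefox hands the URL off to an already-running instance, the launched process exits and no window handle ever shows up):

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;
        using System.Threading;

        class MinimizeAfterLaunch
        {
            [DllImport("user32.dll")]
            static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);
            const int SW_MINIMIZE = 6;

            static void Main()
            {
                Process p = Process.Start("firefox.exe", "http://www.google.com");
                // Poll briefly for the main window handle, then minimize it ourselves.
                for (int i = 0; i < 50 && !p.HasExited && p.MainWindowHandle == IntPtr.Zero; i++)
                {
                    Thread.Sleep(100);
                    p.Refresh();
                }
                if (!p.HasExited && p.MainWindowHandle != IntPtr.Zero)
                    ShowWindow(p.MainWindowHandle, SW_MINIMIZE);
            }
        }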

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table then insert/update to MDC table' approach (BulkUpsert). I've got another process which then reads this data, computes other data, and then saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there are a multitude of users running a C# GUI polling the MDC table and reading updates from it. Now, during the day when the data is changing rapidly, things run pretty smoothly, but then, after market hours, we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info.
    Table def:
        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED ([MDID] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
    Stack Overflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question. (Image: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg) I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like. (Image: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg) Process 258 is calling the following 'BulkUpsert' stored proc, repeatedly, while 73 is calling the next one:
        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction
        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
          and c.value <> t.value
        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
        commit
    And the other one:
        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction
        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;
        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))
        -- Clean up the temp table
        DELETE #TEMPTABLE2
        commit
    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a Unique Constraint instead, but that increased the number of deadlocks 10-fold.
I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
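
    Separately from finding the root cause, a common client-side mitigation (a sketch, not the poster's code; the procedure name is a placeholder) is to retry the call when SQL Server picks it as the deadlock victim, which surfaces as SqlException error number 1205:

        using System.Data;
        using System.Data.SqlClient;
        using System.Threading;

        static class DeadlockRetry
        {
            public static void ExecuteWithRetry(SqlConnection conn, string procName)
            {
                const int maxAttempts = 3;
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        using (SqlCommand cmd = new SqlCommand(procName, conn))
                        {
                            cmd.CommandType = CommandType.StoredProcedure;
                            cmd.ExecuteNonQuery();
                        }
                        return;
                    }
                    catch (SqlException ex)
                    {
                        if (ex.Number != 1205 || attempt >= maxAttempts)
                            throw;                      // not a deadlock, or out of attempts
                        Thread.Sleep(100 * attempt);    // brief back-off before retrying the victimized batch
                    }
                }
            }
        }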

    Read the article

  • How can I await the first completed async task of a list in .Net?

    - by Eyal
    My input is a long list of files located on an Amazon S3 server. I'd like to download the metadata of the files, compute the hashes of the local files, and compare the metadata hash with the local files' hash. Currently, I use a loop to start all the metadata downloads asynchronously, then as each completes, compute MD5 on the local file if needed and compare. Here's the code (just the relevant lines):
        Dim s3client As New AmazonS3Client(KeyId.Text, keySecret.Text)
        Dim responseTasks As New List(Of System.Tuple(Of ListViewItem, Task(Of GetObjectMetadataResponse)))
        For Each lvi As ListViewItem In lvStatus.Items
            Dim gomr As New Amazon.S3.Model.GetObjectMetadataRequest
            gomr.BucketName = S3FileDialog.GetBucketName(lvi.SubItems(2).Text)
            gomr.Key = S3FileDialog.GetPrefix(lvi.SubItems(2).Text)
            responseTasks.Add(New System.Tuple(Of ListViewItem, Task(Of GetObjectMetadataResponse))(lvi, s3client.GetObjectMetadataAsync(gomr)))
        Next
        For Each t As System.Tuple(Of ListViewItem, Task(Of GetObjectMetadataResponse)) In responseTasks
            Dim response As GetObjectMetadataResponse = Await t.Item2
            If response.ETag.Trim(""""c) = MD5CalcFile(lvi.SubItems(1).Text) Then
                lvi.SubItems(3).Text = "Match"
                UpdateLvi(lvi)
            End If
        Next
    I've got two problems: 1) I'm awaiting the responses in the order that I made them. I'd rather process them in the order that they complete so that I get them faster. 2) The MD5 calculation is long and synchronous. I tried making it async but the process locked up. I think that the MD5 task was added to the end of .Net's task list and it didn't get to run until all the downloads completed. Ideally, I process the responses as they arrive, not in order, and the MD5 is asynchronous but gets a chance to run. Edit: Incorporating WhenAny, it looks like this now:
        Dim s3client As New Amazon.S3.AmazonS3Client(KeyId.Text, keySecret.Text)
        Dim responseTasks As New Dictionary(Of Task(Of GetObjectMetadataResponse), ListViewItem)
        For Each lvi As ListViewItem In lvStatus.Items
            Dim gomr As New Amazon.S3.Model.GetObjectMetadataRequest
            gomr.BucketName = S3FileDialog.GetBucketName(lvi.SubItems(2).Text)
            gomr.Key = S3FileDialog.GetPrefix(lvi.SubItems(2).Text)
            responseTasks.Add(s3client.GetObjectMetadataAsync(gomr), lvi)
        Next
        Dim startTime As DateTimeOffset = DateTimeOffset.Now
        Do While responseTasks.Count > 0
            Dim currentTask As Task(Of GetObjectMetadataResponse) = Await Task.WhenAny(responseTasks.Keys)
            Dim response As GetObjectMetadataResponse = Await currentTask
            If response.ETag.Trim(""""c) = MD5CalcFile(lvi.SubItems(1).Text) Then
                lvi.SubItems(3).Text = "Match"
                UpdateLvi(lvi)
            End If
        Loop
        MsgBox((DateTimeOffset.Now - startTime).ToString)
    The UI locks up momentarily whenever MD5CalcFile is done. The whole loop takes about 45s and the first file's MD5 result happens within 1s of starting. If I change the line to:
        If response.ETag.Trim(""""c) = Await Task.Run(Function () MD5CalcFile(lvi.SubItems(1).Text)) Then
    the UI doesn't lock up when MD5CalcFile is done. The whole loop takes about 75s, up from 45s, and the first file's MD5 result happens after 40s of waiting.
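
    For reference, the usual shape of the process-as-they-complete pattern, sketched in C# with hypothetical helper names (the key points are removing the finished task from the dictionary and pushing the CPU-bound hash onto the thread pool with Task.Run):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Security.Cryptography;
        using System.Threading.Tasks;

        static class FirstCompletedSketch
        {
            // Stand-in for the S3 metadata call in the question (returns a quoted ETag).
            static async Task<string> GetRemoteETagAsync(string path)
            {
                await Task.Delay(100);                  // simulate network latency
                return "\"d41d8cd98f00b204e9800998ecf8427e\"";
            }

            static string Md5OfFile(string path)
            {
                using (MD5 md5 = MD5.Create())
                using (FileStream stream = File.OpenRead(path))
                    return BitConverter.ToString(md5.ComputeHash(stream)).Replace("-", "");
            }

            public static async Task CompareAllAsync(IEnumerable<string> files)
            {
                var pending = new Dictionary<Task<string>, string>();
                foreach (string f in files)
                    pending.Add(GetRemoteETagAsync(f), f);

                while (pending.Count > 0)
                {
                    Task<string> done = await Task.WhenAny(pending.Keys);   // whichever finishes first
                    string file = pending[done];
                    pending.Remove(done);                                   // so it is not awaited again

                    string etag = (await done).Trim('"');
                    // CPU-bound hashing goes to the thread pool so the UI thread stays responsive.
                    string hash = await Task.Run(() => Md5OfFile(file));
                    Console.WriteLine("{0}: {1}", file,
                        etag.Equals(hash, StringComparison.OrdinalIgnoreCase) ? "Match" : "Differs");
                }
            }
        }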

    Read the article

  • How can a 1Gb Java heap on a 64bit machine use 3Gb of VIRT space?

    - by Graeme Moss
    I run the same process on a 32bit machine as on a 64bit machine with the same memory VM settings (-Xms1024m -Xmx1024m) and similar VM version (1.6.0_05 vs 1.6.0_16). However the virtual space used by the 64bit machine (as shown in top under "VIRT") is almost three times as big as that in 32bit! I know 64bit VMs will use a little more memory for the larger references, but how can it be three times as big? Am I reading VIRT in top incorrectly? Full data shown below, showing top and then the result of jmap -heap, first for 64bit, then for 32bit. Note the VIRT for 64bit is 3319m for 32bit is 1220m. * 64bit * PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22534 agent 20 0 3319m 163m 14m S 4.7 2.0 0:04.28 java $ jmap -heap 22534 Attaching to process ID 22534, please wait... Debugger attached successfully. Server compiler detected. JVM version is 10.0-b19 using thread-local object allocation. Parallel GC with 4 thread(s) Heap Configuration: MinHeapFreeRatio = 40 MaxHeapFreeRatio = 70 MaxHeapSize = 1073741824 (1024.0MB) NewSize = 2686976 (2.5625MB) MaxNewSize = -65536 (-0.0625MB) OldSize = 5439488 (5.1875MB) NewRatio = 2 SurvivorRatio = 8 PermSize = 21757952 (20.75MB) MaxPermSize = 88080384 (84.0MB) Heap Usage: PS Young Generation Eden Space: capacity = 268500992 (256.0625MB) used = 247066968 (235.62142181396484MB) free = 21434024 (20.441078186035156MB) 92.01715277089181% used From Space: capacity = 44695552 (42.625MB) used = 0 (0.0MB) free = 44695552 (42.625MB) 0.0% used To Space: capacity = 44695552 (42.625MB) used = 0 (0.0MB) free = 44695552 (42.625MB) 0.0% used PS Old Generation capacity = 715849728 (682.6875MB) used = 0 (0.0MB) free = 715849728 (682.6875MB) 0.0% used PS Perm Generation capacity = 21757952 (20.75MB) used = 16153928 (15.405586242675781MB) free = 5604024 (5.344413757324219MB) 74.24378912132907% used * 32bit * PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 30168 agent 20 0 1220m 175m 12m S 0.0 2.2 0:13.43 java $ jmap -heap 30168 Attaching to process ID 30168, please wait... Debugger attached successfully. Server compiler detected. JVM version is 14.2-b01 using thread-local object allocation. Parallel GC with 8 thread(s) Heap Configuration: MinHeapFreeRatio = 40 MaxHeapFreeRatio = 70 MaxHeapSize = 1073741824 (1024.0MB) NewSize = 1048576 (1.0MB) MaxNewSize = 4294901760 (4095.9375MB) OldSize = 4194304 (4.0MB) NewRatio = 8 SurvivorRatio = 8 PermSize = 16777216 (16.0MB) MaxPermSize = 67108864 (64.0MB) Heap Usage: PS Young Generation Eden Space: capacity = 89522176 (85.375MB) used = 80626352 (76.89128112792969MB) free = 8895824 (8.483718872070312MB) 90.0629940005033% used From Space: capacity = 14876672 (14.1875MB) used = 14876216 (14.187065124511719MB) free = 456 (4.3487548828125E-4MB) 99.99693479832048% used To Space: capacity = 14876672 (14.1875MB) used = 0 (0.0MB) free = 14876672 (14.1875MB) 0.0% used PS Old Generation capacity = 954466304 (910.25MB) used = 10598496 (10.107513427734375MB) free = 943867808 (900.1424865722656MB) 1.1104107034039412% used PS Perm Generation capacity = 16777216 (16.0MB) used = 11366448 (10.839889526367188MB) free = 5410768 (5.1601104736328125MB) 67.74930953979492% used

    Read the article

  • Resolving patch conflicts manually

    - by Antony Hatchkins
    I've downloaded a patch from some site and am trying to apply it (Twisted, the Python web framework). Several hunks failed. How do I automate the manual patching process using vim? Details: I'm trying to automate the process of applying the failed hunks. Many tiny changes, each about adding/removing 1-2 chars. Difficult to see. I have to create two new temporary files and :diffthis them manually to see the difference. Yes, this is outside a VCS. I can imagine a neat way to deal with it using git, but I would prefer to avoid creating a git repo for that.

    Read the article

  • VB.NET Application which can compile and run C programs

    - by Arjun Vasudevan
    These days I'm working on a VB.NET application which can be used to edit, compile and run C programs. Can someone tell me how I can call a cl.exe process from within my VB program, and also how I can run the program in the console window itself? Presently I have only the editor ready. With that, one can type in a program and save it with a ".c" extension. Now there are 2 buttons on my form - "Compile" and "Run". When the user clicks the "Compile" button, the program should be passed to the cl.exe process and the errors should be displayed in another textbox or in the DOS (black) screen itself. And when the user clicks the "Run" button, the ".exe" file which just got created should get executed.
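
    A rough sketch of invoking the compiler and capturing its output so it can be shown in a textbox (written in C# for brevity; the same ProcessStartInfo members are available from VB.NET, and it assumes cl.exe is on the PATH with the usual VC environment variables already set):

        using System.Diagnostics;

        static class ClRunner
        {
            public static string Compile(string sourceFile)
            {
                ProcessStartInfo psi = new ProcessStartInfo("cl.exe", "/nologo \"" + sourceFile + "\"");
                psi.UseShellExecute = false;          // required for redirection
                psi.RedirectStandardOutput = true;
                psi.RedirectStandardError = true;
                psi.CreateNoWindow = true;

                using (Process proc = Process.Start(psi))
                {
                    // For very large outputs, prefer the asynchronous OutputDataReceived /
                    // ErrorDataReceived events so the redirected buffers cannot fill and block.
                    string output = proc.StandardOutput.ReadToEnd();
                    string errors = proc.StandardError.ReadToEnd();
                    proc.WaitForExit();
                    return output + errors;           // exit code 0 means the compile succeeded
                }
            }
        }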

    Read the article

  • Powershell GUI: Adding multiple instances of .tooltipseperate to menu

    - by obious
    So, I'm having a problem whereby for some reason I can only add one instance of a ToolStripSeparator to a drop-down menu. E.g. I have to create a new ToolStripSeparator if I want to add another one to a different menu list. I have this:
        $seperator = new-object System.Windows.Forms.Toolstripseparator
        $seperator1 = new-object System.Windows.Forms.Toolstripseparator
    which correlates to this:
        [Void]$packages_menu_bar.DropDownItems.Add($seperator1)
        [Void]$packages_menu_bar.DropDownItems.Add($Remove_from_AD)
        [Void]$process.DropDownItems.Add($Kill_process)
        [Void]$process.DropDownItems.Add($seperator)
    My question is this: how can I add the same instance of a ToolStripSeparator to different menu items? Some sort of array? Thanks
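
    The underlying WinForms behaviour (shown in C# for illustration, not PowerShell) appears to be that a ToolStripItem such as a separator belongs to only one items collection at a time, so the usual approach is simply to create one separator per menu rather than trying to share an instance:

        using System.Windows.Forms;

        static class SeparatorSketch
        {
            static void Main()
            {
                ToolStripMenuItem packagesMenu = new ToolStripMenuItem("Packages");
                ToolStripMenuItem processMenu = new ToolStripMenuItem("Process");

                // Each menu gets its own separator instance; a single instance
                // cannot be displayed in two menus at once.
                packagesMenu.DropDownItems.Add(new ToolStripSeparator());
                processMenu.DropDownItems.Add(new ToolStripSeparator());
            }
        }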

    Read the article

  • Multiple webservices in 1 ear/ejb project

    - by arinte
    We have an EJB project (which is in an EAR) that shares quite a bit of code between 2 web services. The classes that the web services expose live in different packages but have the same names. For example:
        Web service 1: com.d.trunk.Response WS1.process( com.d.trunk.Input );
        Web service 2: com.d.fwd.Response WS2.process( com.d.fwd.Input );
    This builds fine, but when we deploy and view the generated WSDL and XSD, things begin to go a bit haywire. If we look at web service 2, it generates the WSDL and XSD as we expect. But when we look at WS 1's WSDL, for some reason it includes the XSD from WS 2 as well as its own XSD, and its own XSD is missing key types like the Response type. Is this an issue because we have 2 web services in 1 EJB project? Or is it some config issue with NetBeans 6.7.1 and GlassFish v2?

    Read the article

  • Automate paster create -t plone3_buildout

    - by roopesh
    I want to automate the process of plone3_buildout. Explanation: the default way (the one I use) of building a Plone site is using paster, like so:
        paster create -t plone3_buildout
    This asks me a few questions and then creates a default buildout for the site. What I want: I want to automate this process using buildout. My buildout will execute this paster command, feeding my preconfigured values to paster. I haven't found a recipe which can do this. If someone has an idea of how to do this, please share the info. If there is a recipe which can feed values to interactive commands (with known output, like the plone3_buildout command), that would be useful too.

    Read the article

  • Cannot find sleep function

    - by Tyzak
    Hello, I'm new to C programming (I learned C++). I want to create a process with windows.h. At first I just want my main program to create a process (i.e. start another program). Here's my code, but it doesn't really work. I removed every unnecessary line of code except "void sleep(700)" (or "sleep (700)") for testing whether the Windows methods work, but I get an error that "sleep" can't be found.
        #include <iostream>
        #include <windows.h>
        #include <string>
        using namespace std;

        void main()
        {
            //bool ret;
            //startupinfo stupinfo;
            //prozess_information pro2info;
            //Getstartupinfo (&stupinfo);
            //createprozess(null, "C:\\bsss10\\betriebssystemePRA1.exe", null, null, false, create_new_console, null,
            //              null, &stupinfo, &pro2info);
            sleep (700);
            cout<< "hello";
        }
    Thanks in advance.

    Read the article

  • Java multi-threading - what is the best way to monitor the activity of a number of threads?

    - by MalcomTucker
    I have a number of threads that are performing a long-running task. These threads themselves have child threads that do further subdivisions of work. What is the best way for me to track the following:
    - how many total threads my process has created
    - what the state of each thread currently is
    - what part of my process each thread has currently got to
    I want to do it in as efficient a way as possible, and once threads finish I don't want any references to them hanging around, because I need to be freeing up memory as early as possible. Any advice?

    Read the article

  • Need help regarding Async and fsi

    - by Stringer Bell
    I'd like to write some code that runs a sequence of F# scripts (.fsx). The thing is that I could have literally hundreds of scripts, and if I do this:
        let shellExecute program args =
            let startInfo = new ProcessStartInfo()
            do startInfo.FileName <- program
            do startInfo.Arguments <- args
            do startInfo.UseShellExecute <- true
            do startInfo.WindowStyle <- ProcessWindowStyle.Hidden
            //do printfn "%s" startInfo.Arguments
            let proc = Process.Start(startInfo)
            ()

        scripts |> Seq.iter (shellExecute "fsi")
    it could put too much stress on my 2GB system. Anyway, I'd like to run the scripts in batches of n, which also seems like a good exercise for learning Async (I guess it's the way to go). I have written some code, but unfortunately it doesn't work:
        open System.Diagnostics
        let p = shellExecute "fsi" @"C:\Users\Stringer\foo.fsx"
        async {
            let! exit = Async.AwaitEvent p.Exited
            do printfn "process has exited"
        } |> Async.StartImmediate
    foo.fsx is just a hello world script. I'd also like to figure out whether it's doable to retrieve a return code for each executing script, and if not, find another way. Thanks!
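
    For comparison, here is the batching idea sketched in C# (purely illustrative — the batch size, paths and fsi invocation are placeholders); each script's ExitCode gives the per-script return code asked about at the end of the question:

        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;
        using System.Threading.Tasks;

        static class BatchRunner
        {
            // Runs the scripts through fsi, at most batchSize at a time,
            // and collects each script's exit code.
            public static async Task<int[]> RunAllAsync(IEnumerable<string> scripts, int batchSize)
            {
                var results = new List<int>();
                var batches = scripts.Select((script, index) => new { script, index })
                                     .GroupBy(x => x.index / batchSize, x => x.script);
                foreach (var batch in batches)
                {
                    var tasks = batch.Select(script => Task.Run(() =>
                    {
                        var psi = new ProcessStartInfo("fsi", "\"" + script + "\"");
                        psi.UseShellExecute = false;
                        psi.CreateNoWindow = true;
                        using (var p = Process.Start(psi))
                        {
                            p.WaitForExit();
                            return p.ExitCode;          // the per-script return code
                        }
                    }));
                    results.AddRange(await Task.WhenAll(tasks));
                }
                return results.ToArray();
            }
        }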

    Read the article

  • One click login to my google apps solution how can I do it?

    - by Ali
    Hi guys, I'm developing a Google Apps solution and I'm building on the tutorial application given by Google at http://code.google.com/googleapps/marketplace/tutorial_php.html. The thing is that the example sets up the application such that the user has to manually enter the email address or username, and then it takes the user through an authentication process whereby it asks the user if they wish to allow the application to be given access to the services mentioned in the manifest.xml file. Isn't there a better way to do this? What I want is that the user, upon logging into his Google Apps account, just has to click the installed application link and should be taken straight into the application. Can't the whole authentication process be transparent, in the background? I need help on this ASAP. Thanks a bunch!

    Read the article

  • Customised email alerts through MailChimp API

    - by user1293351
    I am building a site that runs an automated process every 30 minutes to match up new flights with their respective user. Once this process is completed, I want to email the flight details out to the respective user. However, the flight info will be different for every single user, with there being 0-300+ potential emails. Is this something that the MailChimp API will allow or do? I found this page http://apidocs.mailchimp.com/api/how-to/transactional-campaigns.php but I am not sure whether it applies to me. Is the STS more suited to this? http://apidocs.mailchimp.com/sts/1.0/ Thanks Alex

    Read the article

  • Windows Azure Training Kit (November 2010 Release Update) – Fantastic Azure training resource

    - by Jim Duffy
    At PDC 2010 in October Microsoft announced a number of new enhancements/features for Windows Azure. In case you missed it, these new enhancements/features have been released in the new Windows Azure Tools for Visual Studio November release (v1.3). The Windows Azure team blog is an excellent resource for information about the new release. Along with the new release the Azure team has also updated the Windows Azure Platform Training Kit. What is the Windows Azure Platform Training Kit you ask? It is a comprehensive set of hands-on training labs and videos designed to help you quickly get up to speed with Windows Azure, SQL Azure, and the Windows Azure AppFabric. The training kit contains updated labs, including a couple I would suggest you hit first:
    - Introduction to Windows Azure – updated to use the new Windows Azure platform Portal
    - Introduction to SQL Azure – updated to use the new Windows Azure platform Portal
    The training kit contains a number of new labs as well, including:
    - Advanced Web and Worker Role – shows how to use admin mode and startup tasks
    - Connecting Apps With Windows Azure Connect – shows how to use Project Sydney
    - Virtual Machine Role – shows how to get started with VM Role by creating and deploying a VHD
    - Windows Azure CDN – simple introduction to the CDN
    - Introduction to the Windows Azure AppFabric Service Bus Futures – shows how to use the new Service Bus features in the AppFabric labs environment
    - Building Windows Azure Apps with Caching Service – shows how to use the new Windows Azure AppFabric Caching service
    - Introduction to the AppFabric Access Control Service V2 – shows how to build a simple web application that supports multiple identity providers
    Ok, that's enough reading, go start learning! Have a day.

    Read the article
