Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.


  • Exception when ASP.NET attempts to delete network file.

    - by Jordan Terrell
    Greetings - I've got an ASP.NET application that is trying to delete a file on a network share. The ASP.NET application's worker process is running under a domain account (confirmed this by looking in TaskManager and by using ShowContexts2.aspx¹). I've been assured by the network admins that the process account is a member of a group that has Modify permissions to the directory that contains the file I'm trying to delete. However, it is unable to do so, and instead I get an exception (changed the file path to all x's): System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.UnauthorizedAccessException: Access to the path '\xxxxxxx\xxxxxxx\xxxxxxx\xxxxxx.xxx' is denied. Any ideas on how to diagnose/fix this issue? Thanks - Jordan ¹ http://www.leastprivilege.com/ShowContextsNET20Version.aspx
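    A quick way to narrow this down is to log the identity the delete actually executes under, since ASP.NET impersonation (or the lack of it) can make that differ from the worker process account shown in Task Manager. A minimal diagnostic sketch, with hypothetical names:

        using System;
        using System.IO;
        using System.Security.Principal;

        public static class DeleteDiagnostics
        {
            // Hypothetical helper: wraps the failing delete and reports who attempted it.
            public static void TryDelete(string path)
            {
                // Unlike Task Manager, this reflects impersonation on the current thread.
                string who = WindowsIdentity.GetCurrent().Name;
                try
                {
                    File.Delete(path);
                }
                catch (UnauthorizedAccessException ex)
                {
                    throw new InvalidOperationException(
                        "Delete of '" + path + "' failed while running as '" + who + "'.", ex);
                }
            }
        }

    If the logged identity is the expected domain account, note that group membership changes only show up in a security token created after the change, so a worker process recycle may be needed before the new Modify permission takes effect.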

    Read the article

  • Understanding this Pascal-FC threaded code

    - by dmindreader
    Program Parcial2; type buffer = channel of integer; var buffers : array [1..2] of buffer; val:integer; process sleeper (id:integer); var i : integer; begin for i:=1 to 10 do begin sleep (random(10*id)); buffers (id):any; end; end; process troll; begin buffers[1]: random(10); end; What are buffers(id):any and buffers[1]:random(10) doing?

    Read the article

  • post-commit hook fails

    - by jarad mayers
    I have a Master/Slave setup using Win2k8R with SVN 1.6.9 and TortoiseSVN 1.6.7. Access is through Apache over http. Everything works, but when I commit I get the following message: Error: post-commit hook failed (exit code 1) with output: Error: The process cannot access the file because it is being used by another process. This happens when using multiple TortoiseSVN dialogs to commit files in rapid succession. If I use one TortoiseSVN dialog and wait till the commit reply is back, I don't see the problem. In other words, committing one at a time causes no issue. The post-commit script output is logged. Even though I get the above error, when I check the Master and Slave repositories the files have been replicated okay with no issue. I am wondering how this issue can be solved.

    Read the article

  • Pipe data from InputStream to OutputStream in Java

    - by Wangnick
    Dear all, I'd like to send a file contained in a ZIP archive unzipped to an external program for further decoding and to read the result back into Java. ZipInputStream zis = new ZipInputStream(new FileInputStream(ZIPPATH)); Process decoder = new ProcessBuilder(DECODER).start(); ??? BufferedReader br = new BufferedReader(new InputStreamReader( decoder.getInputStream(),"us-ascii")); for (String line = br.readLine(); line!=null; line = br.readLine()) { ... } What do I need to put into ??? to pipe the zis content to the decoder.getOutputStream()? I guess a dedicated thread is needed, as the decoder process might block when its output is not consumed.
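    One hedged way to fill in the ??? is a dedicated pump thread that copies the current ZIP entry into the decoder's stdin and then closes it, which signals EOF to the decoder. This sketch assumes zis is declared final and that getNextEntry() has positioned the stream on the file to decode (imports assumed: java.io.OutputStream, java.io.IOException):

        zis.getNextEntry(); // position on the entry to be decoded
        final OutputStream toDecoder = decoder.getOutputStream();
        Thread pump = new Thread(new Runnable() {
            public void run() {
                byte[] buf = new byte[8192];
                try {
                    int n;
                    while ((n = zis.read(buf)) != -1) {
                        toDecoder.write(buf, 0, n); // stream entry bytes into the decoder
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    try { toDecoder.close(); } catch (IOException ignored) {} // EOF for the decoder
                }
            }
        });
        pump.start();

    Your guess about blocking is right in one more place: if the decoder writes much to stderr, that stream needs draining too, or the child process can stall.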

    Read the article

  • How can I use Code Contracts in a C++/CLI project?

    - by Daniel Wolf
    I recently stumbled upon Code Contracts and have started using them in my C# projects. However, I also have a number of projects written in C++/CLI. For C# and VB, Code Contracts offer a handy configuration panel in the project properties dialog. For a C++/CLI project, there is no such panel. From the documentation, I got the impression that adding Code Contracts support to a C++/CLI project should be a simple matter of calling some external tools as part of the build process (namely ccrefgen.exe, cccheck.exe, and ccrewrite.exe). However, the number of command line options and restrictions concerning the call sequence have me somewhat intimidated. Can anybody point me to a simple way to run the Code Contracts tools as an automated part of the build process in Visual Studio?

    Read the article

  • How to automate BlackBerry debugging with Eclipse?

    - by pajton
    I am developing an application for the BlackBerry 8900 and I am using features that force me to test/debug it on the real device. I am looking for a convenient way to automate the build-deploy-launch process. The process is: Package the application & sign it Load it on the device Start a debugging session in Eclipse With the newest version of the BlackBerry plugin for Eclipse, step 1 is almost painless, but I would like to get rid of the dialogs complaining that I am missing some debug files. Steps 2 and 3 must be performed manually. Ideally I would like to turn it all into one script, Eclipse macro, whatever.... Has anyone tried something like this with any success?

    Read the article

  • Start Default Browser - Windows

    - by dbasnett
    When starting the default browser like this: Dim trgt1 As String = "http://www.vbforums.com/showthread.php?t=612471" pi.FileName = trgt1 System.Diagnostics.Process.Start(pi) It takes about 40 seconds to open the page. If I do it like this, though this isn't the default browser Dim trgt1 As String = "http://www.vbforums.com/showthread.php?t=612471" pi.Arguments = trgt1 pi.FileName = "iexplore.exe" 'or firefox.exe System.Diagnostics.Process.Start(pi) it opens immediately. Is this a bug or a feature? I have tried this with both IE and FireFox set to be the default browser.
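    One thing worth ruling out: when FileName is a URL, Process.Start goes through the shell to resolve the default browser, so the delay may be in that resolution rather than in the browser itself. A hedged workaround is to resolve the default handler from the classic HKCR registration once and launch it directly, mirroring the fast iexplore.exe path (a sketch only, not a hardened command-line parser):

        using System.Diagnostics;
        using Microsoft.Win32;

        static void OpenInDefaultBrowser(string url)
        {
            string command;
            using (RegistryKey key = Registry.ClassesRoot.OpenSubKey(@"http\shell\open\command"))
            {
                // e.g. "C:\Program Files\Mozilla Firefox\firefox.exe" -url "%1"
                command = (string)key.GetValue(null);
            }
            // Crude extraction of the executable path from the registered command.
            string exe = command.StartsWith("\"")
                ? command.Substring(1, command.IndexOf('"', 1) - 1)
                : command.Split(' ')[0];

            ProcessStartInfo pi = new ProcessStartInfo(exe, url);
            pi.UseShellExecute = false;
            Process.Start(pi);
        }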

    Read the article

  • Why is "EventType clr20r3, P1 w3wp.exe" logged without a detailed description of this unhandled exception?

    - by Weixiao.Fan
    On the production server, I can see an event from the system Event Viewer when an ASP.NET app crashes: *EventType clr20r3, P1 w3wp.exe, P2 6.0.3790.3959, P3 45d691cc, P4 app_web_default.aspx.cdcab7d2, P5 0.0.0.0, P6 4b2e4bf0, P7 4, P8 4, P9 system.dividebyzeroexception, P10 NIL.* It belongs to the ".NET Runtime 2.0 Error Reporting" category. But I can't find an event belonging to "ASP.NET 2.0.50727.0" that would give me a detailed view of this exception: *An unhandled exception occurred and the process was terminated. Application ID: /LM/W3SVC/505951206/Root Process ID: 1112 Exception: System.DivideByZeroException Message: Attempted to divide by zero. StackTrace: at _Default.Foo(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading._ThreadPoolWaitCallback.PerformWaitCallbackInternal(_ThreadPoolWaitCallback tpWaitCallBack) at System.Threading.ThreadPoolWaitCallback.PerformWaitCallback(Object state) For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp* I can find both of these events on my dev machine. Is that because Visual Studio is installed? If so, how can I disable it so I can emulate the production environment? Great thanks and best regards, Fan
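    The detailed "ASP.NET 2.0.50727.0" event is written by ASP.NET's own health monitoring, so whether it shows up can vary with the healthMonitoring configuration. Rather than chasing that difference, one hedged option is to log the details yourself: exceptions thrown on ThreadPool threads (like the _Default.Foo callback above) bypass Application_Error, but AppDomain.UnhandledException should still fire before w3wp is torn down. A sketch for Global.asax ("MyApp" is a hypothetical event source that must be registered once):

        using System;
        using System.Diagnostics;

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                AppDomain.CurrentDomain.UnhandledException +=
                    delegate(object s, UnhandledExceptionEventArgs args)
                    {
                        Exception ex = args.ExceptionObject as Exception;
                        EventLog.WriteEntry("MyApp",
                            ex == null ? "Unknown unhandled exception" : ex.ToString(),
                            EventLogEntryType.Error);
                    };
            }
        }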

    Read the article

  • How to identify whether a selected slide is a master slide in PowerPoint 2003 programmatically

    - by Gayan
    Recently I was working with code to open a PowerPoint presentation (via VB.NET) and process it slide by slide. If the slide being processed is null or is a master slide, I need to skip it and go to the next one. Can anyone show me how to check whether a given slide is a master slide? Is there any way to check it by slide type? Public Sub CheckForProprtychecker(ByVal Presn As PowerPoint.Presentation) For SlideIndex As Integer = 1 To Presn.Slides.Count() If Presn.Slides(SlideIndex) Is Nothing Then Continue For End If 'do other processing Next End Sub

    Read the article

  • How can I reuse my javascript code between client and server?

    - by Chris Farmer
    I have some javascript code that includes an ANTLR-generated lexer and parser, and some associated syntax tree evaluation functionality. This code runs in the browser in my web app to support users who author code snippets which process scientific data. Now I'd like to do some additional background processing on the server using the same generated parser. I would prefer not to have to re-implement this stuff in C# and have multiple bits of code that did the exact same thing. Performance isn't as critical to me as eliminating duplication, since this is a background process. So, how can I call into my javascript code from C#? And how can I format my script so that it plays nicely with my .NET web app?
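    One route that avoids any reimplementation is hosting a JavaScript engine inside the .NET process and feeding it the very same generated lexer/parser script the browser loads. A hedged sketch using Jint as the embedded engine (API from memory, so verify against the version you install; parseAndEvaluate is a hypothetical entry point the shared script would expose):

        using Jint;

        class ServerSideEvaluator
        {
            private readonly Engine engine = new Engine();

            public ServerSideEvaluator(string generatedParserSource)
            {
                // Load the ANTLR-generated parser plus glue code, exactly as the browser does.
                engine.Execute(generatedParserSource);
            }

            public string Evaluate(string userSnippet)
            {
                // Assumes the script exposes a global parseAndEvaluate(input) function.
                return engine.Invoke("parseAndEvaluate", userSnippet).ToString();
            }
        }

    The main constraint is keeping the shared script free of browser globals (window, document, DOM calls) so the same file runs unmodified in both hosts.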

    Read the article

  • Select comma separated result from via comma separated parameter

    - by Rodney Vinyard
    PROCEDURE [dbo].[GetCommaSepStringsByCommaSepNumericIds] (@CommaSepNumericIds varchar(max))
    AS
    BEGIN
        /* exec GetCommaSepStringsByCommaSepNumericIds '1xx1, 1xx2, 1xx3' */
        DECLARE @returnCommaSepIds varchar(max);

        with cte as
        (
            select distinct Left(qc.myString, 1) + '-' + substring(qc.myString, 2, 9) + '-' + substring(qc.myString, 11, 7) as myString
            from q_CoaRequestCompound qc
            JOIN dbo.SplitStringToNumberTable(@CommaSepNumericIds) AS s
                ON qc.q_CoaRequestId = s.ID
            where SUBSTRING(upper(myString), 1, 1) in ('L', '?')
        )
        SELECT @returnCommaSepIds = COALESCE(@returnCommaSepIds + ''',''', '''') + CAST(myString AS varchar(2x))
        FROM cte;

        set @returnCommaSepIds = @returnCommaSepIds + ''''
        SELECT @returnCommaSepIds
    End

    FUNCTION [dbo].[SplitStringToNumberTable]
    (
        @commaSeparatedList varchar(max)
    )
    RETURNS @outTable table
    (
        ID int
    )
    AS
    BEGIN
        DECLARE @parsedItem varchar(10), @Pos int

        SET @commaSeparatedList = LTRIM(RTRIM(@commaSeparatedList)) + ','
        SET @commaSeparatedList = REPLACE(@commaSeparatedList, ' ', '')
        SET @Pos = CHARINDEX(',', @commaSeparatedList, 1)

        IF REPLACE(@commaSeparatedList, ',', '') <> ''
        BEGIN
            WHILE @Pos > 0
            BEGIN
                SET @parsedItem = LTRIM(RTRIM(LEFT(@commaSeparatedList, @Pos - 1)))
                IF @parsedItem <> ''
                BEGIN
                    INSERT INTO @outTable(ID)
                    VALUES (CAST(@parsedItem AS int)) -- Use appropriate conversion
                END
                SET @commaSeparatedList = RIGHT(@commaSeparatedList, LEN(@commaSeparatedList) - @Pos)
                SET @Pos = CHARINDEX(',', @commaSeparatedList, 1)
            END
        END

        RETURN
    END

    Read the article

  • Using Window Handle to disable Mouse clicks using c#

    - by srk
    I need to disable the mouse clicks and mouse movement for a specific window for a kiosk application. Is it feasible in C#? I have removed the menu bar and title bar of a specific window; will that be a starting point to achieve the above requirement? How can I achieve this? The code for removing the menu bar and title bar using the window handle: #region Constants //Finds a window by class name [DllImport("USER32.DLL")] public static extern IntPtr FindWindow(string lpClassName, string lpWindowName); //Sets a window to be a child window of another window [DllImport("USER32.DLL")] public static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent); //Sets window attributes [DllImport("USER32.DLL")] public static extern int SetWindowLong(IntPtr hWnd, int nIndex, int dwNewLong); //Gets window attributes [DllImport("USER32.DLL")] public static extern int GetWindowLong(IntPtr hWnd, int nIndex); [DllImport("user32.dll", EntryPoint = "FindWindow", SetLastError = true)] static extern IntPtr FindWindowByCaption(IntPtr ZeroOnly, string lpWindowName); [DllImport("user32.dll")] static extern IntPtr GetMenu(IntPtr hWnd); [DllImport("user32.dll")] static extern int GetMenuItemCount(IntPtr hMenu); [DllImport("user32.dll")] static extern bool DrawMenuBar(IntPtr hWnd); [DllImport("user32.dll")] static extern bool RemoveMenu(IntPtr hMenu, uint uPosition, uint uFlags); //assorted constants needed public static uint MF_BYPOSITION = 0x400; public static uint MF_REMOVE = 0x1000; public static int GWL_STYLE = -16; public static int WS_CHILD = 0x40000000; //child window public static int WS_BORDER = 0x00800000; //window with border public static int WS_DLGFRAME = 0x00400000; //window with double border but no title public static int WS_CAPTION = WS_BORDER | WS_DLGFRAME; //window with a title bar public static int WS_SYSMENU = 0x00080000; //window menu #endregion public static void WindowsReStyle() { Process[] Procs = Process.GetProcesses(); foreach (Process proc in Procs) { if (proc.ProcessName.StartsWith("notepad")) { IntPtr pFoundWindow = proc.MainWindowHandle; int style = GetWindowLong(pFoundWindow, GWL_STYLE); //get menu IntPtr HMENU = GetMenu(proc.MainWindowHandle); //get item count int count = GetMenuItemCount(HMENU); //loop & remove for (int i = 0; i < count; i++) RemoveMenu(HMENU, 0, (MF_BYPOSITION | MF_REMOVE)); //force a redraw DrawMenuBar(proc.MainWindowHandle); SetWindowLong(pFoundWindow, GWL_STYLE, (style & ~WS_SYSMENU)); SetWindowLong(pFoundWindow, GWL_STYLE, (style & ~WS_CAPTION)); } } }
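    For the input-blocking half of the question, the user32 EnableWindow function may be all that's needed: it makes a specific window reject mouse clicks and keyboard input, the same mechanism modal dialogs use to disable their owner. It does not stop the pointer from moving over the window, though; that would take a low-level mouse hook. A hedged sketch alongside the declarations above (BlockWindowInput is a hypothetical helper):

        [DllImport("USER32.DLL")]
        static extern bool EnableWindow(IntPtr hWnd, bool bEnable);

        public static void BlockWindowInput(string processName)
        {
            foreach (Process proc in Process.GetProcessesByName(processName))
            {
                if (proc.MainWindowHandle != IntPtr.Zero)
                    EnableWindow(proc.MainWindowHandle, false); // false = window stops accepting input
            }
        }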

    Read the article

  • Speaking in St. Louis on June 14th

    - by Bill Graziano
    I’m going back to speak in St. Louis next month. I didn’t make it last year and I’m looking forward to it. You can find additional details on the St. Louis SQL Server user group web site. The meeting will be held at the Microsoft office and I’ll be speaking at 1PM. I’ll be speaking on the procedure cache. As people get better and better at tuning queries, this is the next major piece to understand. We’ll talk about how and when query plans are reused. The most common issue I see around odd query plans is stored procedures that use one query plan even though the same queries run completely differently when you extract the SQL and hard-code the parameters. That’s just one of the common issues that I’ll address. There will be a second speaker after I’m done, then a short vendor presentation and a drawing for a netbook.
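    For anyone who hasn't run into the "fast when hard-coded, slow as a stored procedure" effect described above, it is usually parameter sniffing: the plan compiled for the first parameter value gets reused for very different ones. A hedged T-SQL illustration (table and procedure names hypothetical):

        -- The plan is compiled for the first value seen, then reused:
        CREATE PROCEDURE GetOrdersByCustomer @CustomerId int AS
        BEGIN
            SELECT * FROM Orders WHERE CustomerId = @CustomerId;
        END
        GO
        EXEC GetOrdersByCustomer @CustomerId = 42; -- plan optimized for a small customer
        EXEC GetOrdersByCustomer @CustomerId = 7;  -- reuses that plan, even if this customer is huge
        GO
        -- Common escape hatches: recompile per call, or optimize for a representative value.
        ALTER PROCEDURE GetOrdersByCustomer @CustomerId int AS
        BEGIN
            SELECT * FROM Orders WHERE CustomerId = @CustomerId
            OPTION (RECOMPILE); -- or: OPTION (OPTIMIZE FOR (@CustomerId = 42))
        END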

    Read the article

  • Why does mysqldump need to be fully pathed when called from a controller or model?

    - by Kris
    When I call mysqldump from a controller or model I need to fully path the binary; when I call it from Rake I don't need to. If I do not use the full path I get a zero-byte file... I can confirm both processes are run using the same user. # Works in a controller, model and Rake task system "/usr/local/mysql/bin/mysqldump -u root #{w.database_name} > #{target_file}" # Only works in a Rake task system "mysqldump -u root #{w.database_name} > #{target_file}" If I call the Rake task from the action it also fails (zero-byte file). OS: Mac Ruby 1.8.6 EDIT: I use Etc.getpwuid(Process.uid).name to get the User of the current process
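    A hedged first check: the Rails server process probably has a different PATH than your shell, and on OS X a GUI-launched server commonly omits /usr/local/mysql/bin. A shell that can't find mysqldump, combined with output redirection, yields exactly a zero-byte file. A diagnostic sketch for the controller:

        require 'etc'

        logger.info "user:  #{Etc.getpwuid(Process.uid).name}"
        logger.info "PATH:  #{ENV['PATH']}"
        logger.info "which: #{`which mysqldump 2>&1`.strip}"

        # If PATH is the culprit, prepending once (e.g. in an initializer) beats
        # hard-coding the binary location at every call site:
        ENV['PATH'] = "/usr/local/mysql/bin:#{ENV['PATH']}"
        system "mysqldump -u root #{w.database_name} > #{target_file}"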

    Read the article

  • How to implement a memory transaction scope in C#?

    - by theburningmonk
    Hi, we have a cache which I would like to put some transaction scopes around, so that any process has to explicitly 'commit' the changes it wants to make to the cached objects, and so that it's possible to roll back any changes when the process fails halfway. Right now we're deep cloning the cached objects on get requests; it works, but it's not a clean solution and involves a fair bit of maintenance too. I remember hearing about some MTS (memory transaction scope) solution on .NET Rocks a while back but can't remember the name of it! Does anyone know of a good MTS framework out there? Alternatively, if I were to implement my own, are there any good guidelines/patterns on how to do this? Thanks,
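    While waiting for the framework's name to surface (Microsoft's experimental STM.NET is one candidate from that era), a minimal roll-your-own sketch may help frame the pattern: buffer writes privately and apply them on Commit, so rollback is just discarding the buffer. Names are hypothetical, and note it does not guard against in-place mutation of objects handed out by Get:

        using System;
        using System.Collections.Generic;

        public class CacheTransaction : IDisposable
        {
            private readonly IDictionary<string, object> cache; // the shared cache
            private readonly Dictionary<string, object> pending = new Dictionary<string, object>();
            private bool committed;

            public CacheTransaction(IDictionary<string, object> cache)
            {
                this.cache = cache;
            }

            public object Get(string key)
            {
                object value;
                // Read-your-own-writes: uncommitted changes win over the shared cache.
                return pending.TryGetValue(key, out value) ? value : cache[key];
            }

            public void Set(string key, object value)
            {
                pending[key] = value; // staged, not yet visible to anyone else
            }

            public void Commit()
            {
                lock (cache)
                {
                    foreach (KeyValuePair<string, object> kv in pending)
                        cache[kv.Key] = kv.Value;
                }
                committed = true;
            }

            public void Dispose()
            {
                if (!committed) pending.Clear(); // rollback: simply forget the staged writes
            }
        }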

    Read the article

  • Transparent proxying - how to pass socket to local server without modification?

    - by Luca Farber
    Hello, I have a program that listens on port 443 and then redirects to either an SSH or HTTPS local server depending on the detected protocol. The program does this by connecting to the local server and proxying all data back and forth through its own process. However, this causes the originating host on the local servers to be logged as localhost. Is there any way to pass the socket directly to the local server process (rather than just making a new TCP connection) so that the parameters of sockaddr_in (or sockaddr_in6) will be retained? Platform for this is Linux.
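    On Linux the literal version of this is possible: hand the accepted descriptor itself to the local server over a Unix domain socket using an SCM_RIGHTS control message, so the receiver holds the very same socket and getpeername() still reports the original client. The catch is that the local server must cooperate by calling recvmsg(); a stock sshd or httpd won't. A hedged sketch of the sending side, error handling trimmed:

        /* Sketch (sender side): hand the accepted TCP socket 'fd' to the local
         * server over an AF_UNIX socket 'unix_sock' using SCM_RIGHTS. */
        #include <string.h>
        #include <sys/socket.h>
        #include <sys/uio.h>

        int send_fd(int unix_sock, int fd)
        {
            struct msghdr msg;
            struct iovec iov;
            char dummy = '*';                  /* must carry at least one data byte */
            char ctrl[CMSG_SPACE(sizeof(int))];
            struct cmsghdr *cmsg;

            memset(&msg, 0, sizeof(msg));
            iov.iov_base = &dummy;
            iov.iov_len  = 1;
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;
            msg.msg_control = ctrl;
            msg.msg_controllen = sizeof(ctrl);

            cmsg = CMSG_FIRSTHDR(&msg);
            cmsg->cmsg_level = SOL_SOCKET;
            cmsg->cmsg_type  = SCM_RIGHTS;     /* "this message carries descriptors" */
            cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

            return sendmsg(unix_sock, &msg, 0); /* receiver recvmsg()s a duplicate of fd */
        }

    If the servers can't be modified, the usual alternative for the HTTPS side is kernel-level transparent proxying (e.g. iptables TPROXY) rather than descriptor passing.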

    Read the article

  • ProcessCmdKey - wait for KeyUp?

    - by Tom Frey
    Hi, I'm having the following issue in a WinForms app. I'm trying to implement hotkeys and I need to process key messages whenever the control is active, no matter if the focus is on a textbox within that control, etc. Overriding ProcessCmdKey works beautifully for this and does exactly what I want, with one exception: if a user presses a key and keeps it pressed, ProcessCmdKey keeps getting called for the repeated WM_KEYDOWN messages. What I want to achieve is that the user has to release the key again before another hotkey action triggers (so someone sitting on the keyboard would not cause continuous hotkey events). However, I can't find where to catch WM_KEYUP events so that I can set a flag telling ProcessCmdKey whether it should act again. Can anyone help out here? Thanks, Tom
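    One hedged shortcut avoids tracking WM_KEYUP altogether: bit 30 of the WM_KEYDOWN lParam is the "previous key state" flag, which is set only for auto-repeated keystrokes, and it is available right inside ProcessCmdKey. IsHotkey and RunHotkeyAction stand in for the existing hotkey logic:

        private const int WM_KEYDOWN = 0x0100;

        protected override bool ProcessCmdKey(ref Message msg, Keys keyData)
        {
            if (msg.Msg == WM_KEYDOWN && IsHotkey(keyData)) // IsHotkey: your existing matching logic
            {
                // Bit 30 of lParam = "previous key state": 1 only for auto-repeats.
                bool isAutoRepeat = ((long)msg.LParam & 0x40000000L) != 0;
                if (!isAutoRepeat)
                    RunHotkeyAction(keyData); // fires once per physical key press
                return true; // swallow repeats so they don't leak to the focused control
            }
            return base.ProcessCmdKey(ref msg, keyData);
        }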

    Read the article

  • Using Window Handle to disable Mouse clicks and Keyboard Inputs using c#

    - by srk
    I need to disable the mouse clicks, mouse movement and keyboard inputs for a specific window for a kiosk application. Is it feasible in C#? I have removed the menu bar and title bar of a specific window; will that be a starting point to achieve the above requirement? The code for removing the menu bar and title bar using the window handle: #region Constants //Finds a window by class name [DllImport("USER32.DLL")] public static extern IntPtr FindWindow(string lpClassName, string lpWindowName); //Sets a window to be a child window of another window [DllImport("USER32.DLL")] public static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent); //Sets window attributes [DllImport("USER32.DLL")] public static extern int SetWindowLong(IntPtr hWnd, int nIndex, int dwNewLong); //Gets window attributes [DllImport("USER32.DLL")] public static extern int GetWindowLong(IntPtr hWnd, int nIndex); [DllImport("user32.dll", EntryPoint = "FindWindow", SetLastError = true)] static extern IntPtr FindWindowByCaption(IntPtr ZeroOnly, string lpWindowName); [DllImport("user32.dll")] static extern IntPtr GetMenu(IntPtr hWnd); [DllImport("user32.dll")] static extern int GetMenuItemCount(IntPtr hMenu); [DllImport("user32.dll")] static extern bool DrawMenuBar(IntPtr hWnd); [DllImport("user32.dll")] static extern bool RemoveMenu(IntPtr hMenu, uint uPosition, uint uFlags); //assorted constants needed public static uint MF_BYPOSITION = 0x400; public static uint MF_REMOVE = 0x1000; public static int GWL_STYLE = -16; public static int WS_CHILD = 0x40000000; //child window public static int WS_BORDER = 0x00800000; //window with border public static int WS_DLGFRAME = 0x00400000; //window with double border but no title public static int WS_CAPTION = WS_BORDER | WS_DLGFRAME; //window with a title bar public static int WS_SYSMENU = 0x00080000; //window menu #endregion public static void WindowsReStyle() { Process[] Procs = Process.GetProcesses(); foreach (Process proc in Procs) { if (proc.ProcessName.StartsWith("notepad")) { IntPtr pFoundWindow = proc.MainWindowHandle; int style = GetWindowLong(pFoundWindow, GWL_STYLE); //get menu IntPtr HMENU = GetMenu(proc.MainWindowHandle); //get item count int count = GetMenuItemCount(HMENU); //loop & remove for (int i = 0; i < count; i++) RemoveMenu(HMENU, 0, (MF_BYPOSITION | MF_REMOVE)); //force a redraw DrawMenuBar(proc.MainWindowHandle); SetWindowLong(pFoundWindow, GWL_STYLE, (style & ~WS_SYSMENU)); SetWindowLong(pFoundWindow, GWL_STYLE, (style & ~WS_CAPTION)); } } }

    Read the article

  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note – Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, Oracle/non-Oracle platforms, and it’s not a mandatory requirement to use Exadata as the base platform.

    Database-as-a-Service (DBaaS) is an important trend these days, and the top business drivers motivating customers towards a private database cloud model include constant pressure to reduce IT costs and complexity, and also to be able to improve agility and quality of service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment, and with Exadata already a proven best-in-class consolidation platform, we are now seeing more and more customers starting to evolve from an Exadata based platform into an agile, self service driven private database cloud using Oracle Enterprise Manager 12c. Together, Exadata Database Machine and Enterprise Manager 12c provide the industry’s most comprehensive and integrated solution to transform from a typical silo’ed environment into an enterprise class database cloud with self service, rapid elasticity and pay-per-use capabilities.

    In today’s post, I’ll list the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are chalked down based on a recent DBaaS implementation from a real customer engagement:

    Project Planning – The first step involves defining the scope of implementation, mapping functional requirements and objectives to use cases, defining high availability, network and security requirements, and delivering the project plan. In a cloud project you plan around technology, business and processes all together, so ensure you engage your actual end users and stakeholders early on in the project, right from the scoping and planning stage.

    Setup your EM 12c Cloud Control Site – Once project plan approval and sign off from stakeholders is achieved, refer to the EM 12c Install guide. Some important tips to follow during the site setup phase: review the new EM 12c Sizing paper before you get started with the install; the Cloud, Chargeback and Trending, and Exadata plug-ins should be selected for deployment during install; refer to the EM 12c Administrator’s guide for High Availability, Security, and Network/Firewall best practices and options; and your management and managed infrastructure should not be combined, i.e. the EM 12c repository should not be hosted on the same Exadata where the target Database Cloud is to be set up.

    Setup Roles and Users – Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR) and Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by the Super Administrator via the Setup menu –> Security option. For Self Service/SSA users, custom role(s) based on EM_SSA_USER should be created, and the EM_USER and PUBLIC roles should be revoked during SSA user account creation.

    Configure Software Library – The Cloud Administrator logs in and in this step configures the software library via the Enterprise menu –> Provisioning and Patching option; the storage location is an OMS shared filesystem. The Software Library is the centralized repository that stores all software entities and is often termed the ‘local store’.

    Setup Self Update – Self Update is one of the most innovative and cool new features in the EM 12c framework. Self Update can be accessed via the Setup -> Extensibility option by the Super Administrator and is the unified delivery mechanism to get all new and updated entities (Agent software, plug-ins, connectors, gold images, provisioning bundles, etc.) in EM 12c.

    Deploy Agents on all Compute nodes, and discover Exadata targets – Refer to the Exadata discovery cookbook for a detailed walkthrough to ensure successful discovery of Exadata targets.

    Configure Privilege Delegation Settings – This step involves deployment of the privilege setting template on all the nodes by the Super Administrator via the Setup menu -> Security option, with the option to define whether to use sudo or powerbroker for all provisioning and patching operations.

    Provision Grid Infrastructure with RAC Database on Compute Nodes – Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In the case of Exadata, Grid Infrastructure and RAC Database software is already deployed on compute nodes via OneCommand from Oracle, so the SSA Administrator just needs to discover the Oracle Homes and Listener as EM targets. Databases will be created as and when users request databases from the cloud.

    Customize the Create Database Deployment Procedure – The actual database creation steps are "templatized" in this step by the Self Service Administrator, and the newly saved deployment procedure will be used during service template creation in the next step. This is an important step, so make sure you have locked all the required variables marked as locked 'Y' in this table.

    Setup Self Service Portal – This step involves setting up zones, user quotas, service templates and the chargeback plan. The SSA portal is set up by the Self Service Administrator via the Setup menu -> Cloud -> Database option, following the guided workflow. Refer to the DBaaS cookbook for details. You also have an option to customize the SSA login page via steps documented in the EM 12c Cloud Administrator’s guide.

    Final Checks – Define and document process guidelines for SSA users and administrators. Get your SSA users trained on the Self Service Portal features and the overall DBaaS model; SSA administrators should be familiar with the Self Service Portal setup pieces, EM 12c database lifecycle management capabilities and the overall EM 12c monitoring framework.

    GO LIVE – Announce the rollout of Database-as-a-Service to your SSA users. Users can log in to the Self Service Portal and request/monitor/view their databases in the Exadata based database cloud. Congratulations! You just delivered a successful database cloud implementation project!

    In future posts, we will cover these additional useful topics around database cloud: DBaaS implementation tips and tricks, right from setup to self service to managing the cloud lifecycle; ‘How to’ enable copies of real production databases in DBaaS with rapid provisioning in the database cloud; and a case study of a customer who recently achieved success with their transformational journey from a traditional silo’ed environment on to an Exadata based database cloud using Enterprise Manager 12c.

    More Information – Podcast on Database as a Service using Oracle Enterprise Manager 12c; Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide; DBaaS Cookbook; Exadata Discovery Cookbook; Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal; Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal.

    Stay Connected: Twitter | Face book | You Tube | Linked in | Newsletter

    Read the article

  • How do I synchronize access to shared memory in LynxOS/POSIX?

    - by GrahamS
    I am implementing two processes on a LynxOS SE (POSIX conformant) system that will communicate via shared memory. One process will act as a "producer" and the other a "consumer". In a multi-threaded system my approach to this would be to use a mutex and condvar (condition variable) pair, with the consumer waiting on the condvar (with pthread_cond_wait) and the producer signalling it (with pthread_cond_signal) when the shared memory is updated. How do I achieve this in a multi-process, rather than multi-threaded, architecture? Is there a LynxOS/POSIX way to create a condvar/mutex pair that can be used between processes? Or is some other synchronization mechanism more appropriate in this scenario?
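    POSIX does define exactly this: both mutexes and condition variables can be shared across processes when they are created with the PROCESS_SHARED attribute and placed in the shared memory segment itself. Whether LynxOS supports it is advertised by the _POSIX_THREAD_PROCESS_SHARED feature macro. A sketch of the producer-side setup, with error checks trimmed:

        /* A mutex/condvar pair living inside the shared memory segment. */
        #include <pthread.h>

        typedef struct {
            pthread_mutex_t mutex;
            pthread_cond_t  cond;
            int             data_ready;   /* the predicate the consumer waits on */
        } shm_sync_t;

        /* Run once by the producer after mapping the shared segment. */
        void shm_sync_init(shm_sync_t *s)
        {
            pthread_mutexattr_t ma;
            pthread_condattr_t  ca;

            pthread_mutexattr_init(&ma);
            pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
            pthread_mutex_init(&s->mutex, &ma);

            pthread_condattr_init(&ca);
            pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
            pthread_cond_init(&s->cond, &ca);

            s->data_ready = 0;
        }

    After that, pthread_cond_wait and pthread_cond_signal work across the two processes exactly as in the threaded design. If the option turns out to be unavailable on the target, named POSIX semaphores (sem_open) are the usual cross-process fallback, at the cost of condvar-style predicate waiting.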

    Read the article

  • Firefox add-on tab-specific buttons and scripts, similar to Page Actions in Google Chrome

    - by Chetan
    I want to write a Firefox extension that acts exactly like the built-in RSS feed scanner (as an exercise). It should do the following: On each new page / tab load, it should scan the content of the page for RSS feeds If there are RSS feeds in the page, it should put a button in the location bar that the user can click On clicking the button, a speech bubble should appear under the button (the way a speech bubble appears under the bookmarks star when you click on it), with information on the feeds and buttons to subscribe to them So my main questions are: What is the process to run specific content scripts for specific pages? What is the process to use the results of those scripts to update the speech bubble for each location bar button for each tab? Basically, I'm trying to figure out how to do in Firefox what Page Actions are in Google Chrome. Please help! :)

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi, so I saw this post here and read it, and it seems like bulk copy might be the way to go: http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials: http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx The first way uses 2 ado.net 2.0 features, BulkInsert and BulkCopy; the second one uses linq to sql and OpenXML. The second sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However, as one person pointed out in the posts, it just goes around the issue at the cost of performance (nothing wrong with that in my opinion). First I will talk about the 2 ways in the first tutorial. I am using VS2010 Express, .net 4.0, MVC 2.0, SQL Server 2005. Is ado.net 2.0 the most current version? Based on the technology I am using, are there some updates to what I am going to show that would improve it somehow? Is there anything that these tutorials left out that I should know about?

    BulkInsert: I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) )

    SP Code: USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END

    C# Code: /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower than the "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); }

    So the first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? I am sending 500,000 records, so I did a batch size of 500,000. Next, why does it crash when I do this? If I set the batch size to 1000 it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException:

    Inserting 500,000 records with an insert batch size of 1000 took "2 mins and 54 seconds". Of course this is no official time; I sat there with a stop watch (I am sure there are better ways but was too lazy to look up what they were). So I find that kinda slow compared to all my other ones (except the linq to sql insert one) and I am not really sure why.

    Next I looked at bulkcopy: /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } }

    This one seemed to go really fast and did not even need an SP (can you use an SP with bulk copy? If you can, would it be better?). BatchCopy had no problem with a 500,000 batch size. So again, why make it smaller than the number of records you want to send? I found that with BatchCopy and a 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster than the bulkinsert one above.

    Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc

    C# code: /// <summary> /// This is using linq to sql to make the table objects. /// It is then serialized to an xml document and sent to a stored procedure /// that then does a bulk insert (I think with OpenXML). /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } }

    So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like, I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood, but I do not even know how to take this example SP and change it to fit my tables since, like I said, I don't know what is going on. I also don't know what would happen if the object had more tables in it. Like, say I have a ProductName table that has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good: for 500,000 records it took 52 seconds.

    The last way of course was just using linq to do it all, and it was pretty bad. /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } }

    I did only 50,000 records and that took over a minute to do. So I really narrowed it down to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationships, for either way. I am not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying it yet. I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting, so that will slow it down, and that sort of makes linq to sql nicer as you've got objects, especially if you're first parsing data from an xml file before you insert into the database.

    Full C# code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serialized to an xml document and sent to a stored procedure /// that then does a bulk insert (I think with OpenXML). /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower than the "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1.
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }

    Read the article

  • Producer/consumer in Grails?

    - by Mulone
    Hi all, I'm trying to implement a Consumer/Producer app in Grails, after several unsuccessful attempts at implementing concurrent threads. Basically I want to store all the events coming from clients (through separate AJAX calls) in a single queue and then process that queue in a linear way as soon as new events are added. This looks like a Producer/Consumer problem: http://en.wikipedia.org/wiki/Producer-consumer_problem How can I implement this in Grails (maybe with a timer or, even better, by generating a 'process queue' event)? Basically I'd like to have a singleton service waiting for new events in the queue and processing them linearly (even if the queue is loaded by several concurrent processes). Any hints? Cheers!
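    A hedged sketch of that singleton service (Grails services are singletons by default): controllers enqueue events from their concurrent AJAX requests, while one daemon worker drains the queue so processing stays strictly linear. Names are illustrative:

        import java.util.concurrent.LinkedBlockingQueue

        class EventQueueService {
            static transactional = false

            private final queue = new LinkedBlockingQueue()

            EventQueueService() {
                // Single daemon consumer: events are handled one at a time, in order.
                Thread.startDaemon('event-consumer') {
                    while (true) {
                        def event = queue.take()   // blocks until a producer adds one
                        handle(event)
                    }
                }
            }

            // Producers (e.g. controllers serving AJAX calls) just enqueue and return.
            void publish(event) { queue.put(event) }

            private void handle(event) {
                // ... actual event processing here ...
            }
        }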

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse that was the MERGE statement that took the rows from the working table and updated the real fact table. Use Warehouse CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5)) CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date) CREATE PROC Integration.MergePolicy as begin begin tran Merge fact.Policy as tgt Using Integration.MergePolicy as Src On (tgt.PolicyId = Src.PolicyId) When not matched by Target then Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate) values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate) When matched and src.Operation = 'U' then Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate When matched and src.Operation = 'D' then Delete; delete from Integration.WorkPolicy commit end Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, it was getting about 1.5 million rows inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, and 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans. 
(I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATS_IO and ran the sproc again. The output was quite interesting.Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. I've reproduced the above from memory, the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a cluster index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and added the clustered index, both of which took very little time). I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike  
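    Condensing the conclusions above into runnable form, a hedged sketch of the two mitigations, using the work table names from the post:

        -- A clustered index keeps scans on the emptied work table from wading
        -- through the empty pages a DELETE can leave behind in a heap:
        CREATE CLUSTERED INDEX IX_MergePolicy ON Integration.MergePolicy (PolicyId);

        -- And TRUNCATE releases the pages immediately, so the cleanup step becomes:
        TRUNCATE TABLE Integration.MergePolicy; -- instead of: delete from Integration.MergePolicy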

    Read the article
