Search Results

Search found 24301 results on 973 pages for 'execution process mfg'.


  • How can I get elements out of an array with Template Toolkit?

    - by Przemek
     I have an array of paths that I want to read out with Template Toolkit. How can I access the elements of this array? The situation is this:

         my @dirs;
         opendir(DIR,'./directory/') || die $!;
         @dirs = readdir(DIR);
         close DIR;
         $vars->{'Tree'} = @dirs;

     Then I call the template page like this:

         $template->process('create.tmpl', $vars)
             || die "Template process failed: ", $template->error(), "\n";

     In this template I want to build a tree of the directories in the array. How can I access them? My idea was to start with a FOREACH in the template, like this:

         [% FOREACH dir IN Tree.dirs %] $dir [% END %]

    Read the article

  • How to salvage SQL server 2008 query from KILLED/ROLLBACK state?

    - by littlegreen
     I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it will gather a list of batches and recursively call itself in order to iterate over the batches. In (pseudo-)code, it looks something like this:

         CREATE PROCEDURE spProcedure AS
         BEGIN
             IF @code = 0
             BEGIN
                 ...
                 WHILE @@Fetch_Status = 0
                 BEGIN
                     EXEC spProcedure @code
                     FETCH NEXT ... INTO @code
                 END
             END
             ELSE
             BEGIN
                 -- Disable indexes
                 ...
                 INSERT INTO table SELECT (...)
                 -- Enable indexes
                 ...

     Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, or one of the indexes it uses is misdefined or disabled. In that case, I want to be able to kill the procedure, truncate and recreate the resulting table, and try again. However, when I try to kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return. From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me:

         SPID 75: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 554 seconds.

     I did find a forum message hinting that another spid should be killed before the first one can start a rollback. But that didn't work for me either, plus I do not understand why that would be the case... could it be because I am recursively calling my own stored procedure? (But it should have the same spid, right?) In any case, my process is just sitting there, being dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not wait for hours while my server sits dead pretending to finish a supposed rollback. Is there some way in which I can tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how to rewrite my query in a better way, or how to kill the process successfully without restarting the server?

    Read the article

  • Winnipeg SQL Server UG April Event – How To Do An Index Review

    - by D'Arcy Lussier
     April Event - How to Do an Index Review
     April 14th, 2010, 5:30 - 8:00
     17th Floor Conference Room, Richardson Building, One Lombard Place, Winnipeg
     Pizza and drinks provided!

     Did you know that SQL Server 2005+ keeps query execution statistics, index usage statistics and even missing index statistics? Learn how to access this information and use it to help you make good decisions about what your database really needs in terms of indexes, in a lot less time than you might think an index review should take. There are 6 or 7 (depending on your version of SQL Server) DMVs (dynamic management views) to look at, which reveal a lot about your database and how you can improve its performance.

     To register for this event, please click HERE!

    Read the article

  • Java BackgroundWorker: Scope of widget to be updated unclear

    - by erlord
     Hi all, I am trying to understand the mechanism of org.jdesktop.swingx.BackgroundWorker. Its javadoc presents the following example:

         final JLabel label;
         class MeaningOfLifeFinder implements BackgroundListener {
             public void doInBackground(BackgroundEvent evt) {
                 String meaningOfLife = findTheMeaningOfLife();
                 evt.getWorker().publish(meaningOfLife);
             }
             public void process(BackgroundEvent evt) {
                 label.setText("" + evt.getData());
             }
             public void done(BackgroundEvent evt) {}
             public void started(BackgroundEvent evt) {}
         }
         (new MeaningOfLifeFinder()).execute();

     Apart from the fact that I doubt the result will ever get published, I wonder how label is passed to the process method, where it is being updated. I thought its scope was limited to code outside the BackgroundListener implementation. Quite confused I am ... any answers for me? Thanks in advance

    Read the article

  • Google Checkout, OpenID, and downloadable products

    - by craigmoliver
     I'm going to use Google Checkout to process orders for downloadable content. When the order process is completed via Google Checkout, I'd like the user to be able to come back to my site, authenticate with the Google credentials (OpenID?) they purchased the item with, have that linked on the back end, and download the goods. The site is written using C# and ASP.NET MVC. Is this possible, or how should I rethink this? Are there open-source libraries to get me started?

    Read the article

  • Commit into TortoiseSVN

    - by pratap
     hello,

         <exec executable="tortoiseproc.exe">
             <baseDirectory>C:\Program Files\TortoiseSVN\bin</baseDirectory>
             <buildArgs>/command:commit /path:\******\trunk\dotnet /notempfile /closeonend</buildArgs>
             <buildTimeoutSeconds>1000</buildTimeoutSeconds>
         </exec>

     The configuration above pops up a window asking me to enter a message, select the changed content, click OK, and then click OK again after the process completes. I would be extremely thankful if anyone could suggest how to avoid these dialogs when the commit is done through CruiseControl (config file). thanks. pratap

    Read the article

  • Avoiding exceptions when uploading files in Laravel

    - by occam98
     I've got a file upload field (attachment1) in a form that may or may not have a file uploaded in it when I process the form in Laravel. When I try to process the page, this line generates an exception:

         Input::upload('attachment1', path('storage').'attachments/'.$name);

     Here is the text of the exception:

         Message: Call to a member function move() on a non-object

     It seems that I need to check in advance whether 'attachment1' has a file, and I found that the function Input::has_file('attachment1') is supposed to tell me whether or not 'attachment1' has a file, but even when I submit an empty form, it returns true. Also, from reading the documentation, it seems that Input::upload is supposed to just return false when trying to upload a non-existent file, so why does it produce this exception instead, and how can I fix it?

    Read the article

  • Python architecture question

    - by tom smith
     Hi. I am creating a distributed crawling Python app. It consists of a master server and associated client apps that will run on client servers. The purpose of the client app is to run across a targeted site and extract specific data. The clients need to go "deep" within the site, behind multiple levels of forms, so each client is specifically geared towards a given site. Each client app looks something like this:

         main:
             parse initial url
             call function level1 (data1)

         function level1 (data)
             parse the url for data1
             use the required xpath to get the dom elements
             call the next function: call level2 (data)

         function level2 (data2)
             parse the url for data2
             use the required xpath to get the dom elements
             call the next function: call level3

         function level3 (data3)
             parse the url for data3
             use the required xpath to get the dom elements
             call the next function: call level4

         function level4 (data)
             parse the url for data4
             use the required xpath to get the dom elements

         at the final function:
             -- all the data is output, and eventually returned to the server
             -- at this point the data has elements from each function

     My question: given that the number of calls made to the child function by the current function varies, I'm trying to figure out the best approach. Each function essentially fetches a page of content, and then parses the page using a number of different XPath expressions, combined with different regex expressions depending on the site/page.

     If I run a client on a single box as a sequential process, it'll take a while, but the load on the box is rather small. I've thought of attempting to implement the child functions as threads spawned from the current function, but that could be a nightmare, as well as quickly bring the "box" to its knees! I've also thought of breaking the app up in a manner that would allow the master to essentially pass packets to the client boxes, so that each client/function could be run directly from the master (see the sketch below). This would require a bit of a rewrite, but it has a number of advantages: a bunch of redundancy, and speed. It would detect if a section of the process was crashing and restart from that point. But I'm not sure if it would be any faster... I'm writing the parsing scripts in Python, so any thoughts/comments would be appreciated. I can get into a great deal more detail, but didn't want to bore anyone! Thanks! Tom
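
     The queue-and-worker shape of that "master passes packets" idea can be sketched in a few lines. This is only a sketch, assuming a single box and the standard library; parse_level() is a hypothetical stand-in for the per-level fetch plus XPath/regex parsing, and the master/client split would replace the in-process queue with whatever transport the master actually uses.

         # Sketch: each "level" becomes a work item on a queue instead of a direct
         # recursive call, so the fan-out per level can vary freely and workers can
         # be restarted without losing the rest of the crawl.
         import queue
         import threading

         def parse_level(url, level):
             # Placeholder for the real fetch + XPath/regex extraction.
             # Returns (extracted_data, list_of_next_level_urls).
             return {"url": url, "level": level}, []

         def worker(tasks, results):
             while True:
                 item = tasks.get()
                 if item is None:              # poison pill: shut this worker down
                     tasks.task_done()
                     break
                 url, level = item
                 data, next_urls = parse_level(url, level)
                 results.put(data)
                 for next_url in next_urls:    # enqueue the child calls
                     tasks.put((next_url, level + 1))
                 tasks.task_done()

         def crawl(start_url, num_workers=4):
             tasks, results = queue.Queue(), queue.Queue()
             tasks.put((start_url, 1))
             threads = [threading.Thread(target=worker, args=(tasks, results), daemon=True)
                        for _ in range(num_workers)]
             for t in threads:
                 t.start()
             tasks.join()                      # wait until every queued page is parsed
             for _ in threads:
                 tasks.put(None)
             return [results.get() for _ in range(results.qsize())]

     Making each work item self-describing (url plus level) is what gives the restart-after-crash property: the "packet" the master hands to a client box is exactly one of these items.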

    Read the article

  • Automate testing of future-only items business rules

    - by Titan
     I currently have a business object with a validation business rule: it can only be created for future dates (tomorrow onwards), and I cannot create new items for today. I have a process which runs the non-future business objects through some steps. Because of this, I have to set things up today and test tomorrow, and when the test fails, I can only create a new object tomorrow and test again the following day. Are there any easy ways to automate this process in any testing frameworks (see the sketch below)? I think our testers are using the Visual Studio 2010 Test Manager. How do you guys manage situations like this? Cheers
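
     The usual trick for date-dependent rules like this is to stop the rule from reading the system date directly and instead hand it a clock that the test controls, so "today" can be moved without waiting a real day. Below is only a sketch of that idea, written in Python rather than the .NET stack mentioned above, and every name in it (SystemClock, FixedClock, BusinessObject) is a made-up placeholder; the same dependency-injection pattern applies in C# with an injectable clock interface.

         # Sketch: inject a clock so "today" can be faked in automated tests.
         from datetime import date

         class SystemClock:
             def today(self):
                 return date.today()

         class FixedClock:
             def __init__(self, fixed_day):
                 self.fixed_day = fixed_day
             def today(self):
                 return self.fixed_day

         class BusinessObject:
             def __init__(self, effective_date, clock=SystemClock()):
                 # Rule under test: items may only be created for tomorrow onwards.
                 if effective_date <= clock.today():
                     raise ValueError("items can only be created for future dates")
                 self.effective_date = effective_date

         # In a test, "today" is whatever the fake clock says, so the
         # set-up-today / run-tomorrow wait disappears:
         clock = FixedClock(date(2010, 4, 14))
         obj = BusinessObject(date(2010, 4, 15), clock=clock)  # valid: one day in the future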

    Read the article

  • What are the 'must know' GDB commands?

    - by Chris Smith
     I'm starting to get the hang of GDB, but everything still feels much slower than when debugging in Eclipse or Visual Studio. Are there any GDB commands you find particularly useful or productive? My life became dramatically better when I discovered:

         list - Display source code near the current instruction

     But that is still pretty basic. (And unnecessary when running GDB from Emacs.) Is there any way to do things like set up a watch window? (Print and update the result of an expression every time execution stops.)

    Read the article

  • In-Proc SxS opens for shell extension in managed code?

    - by Jens Granlund
     The recommendation used to be "Do not write in-process shell extensions in managed code." But with .NET Framework 4 and In-Process Side-by-Side, the main reason not to write shell extensions in managed code should be resolved. With that said, I have three questions:

     1. Is it now okay to write shell extensions in managed code?
     2. Which problems, if any, might there be with writing shell extensions in managed code?
     3. What reasons might there be to write shell extensions in unmanaged code?

    Read the article

  • How do I indicate success and failure with colour?

    - by Steve McLeod
     I need to make a Java component that turns the background a certain colour when a process passes, and another colour when the process fails. My first thought was green for success, red for failure. But then I read that 10% of males can't differentiate between these two colours. What would be a better combination of colours? (For the nitpickers: yes, I know that colour alone doesn't suffice, and that text, shape, and noise can also be used. Nevertheless, I am asking about the appropriate use of colour.)

    Read the article

  • MSMQ binding in WCF

    - by pdiddy
     I have some messages in my queue. Now I notice that after 3 tries the service host faults. Is this normal behavior? Where do the 3 retries come from? I thought it came from receiveRetryCount, but I set that one to 1. I have 20 messages in my queue waiting to be processed. The WCF operation that is responsible for processing the message supports transactions, so if it can't process the message it will throw, so that the message stays in the queue. I didn't think that it would fault the ServiceHost after a number of retries; is this documented somewhere? I'm running the MSMQ service on my Windows XP machine. I'm more interested in documentation indicating that the service host will fault after a number of retries. Is this actually true?

    Read the article

  • Control Windows VM from Linux Host

    - by vy32
     I am looking for a tool that will allow me to monitor and control programs running inside a Windows VM from the Linux host machine. I realize that this is similar to what a rootkit would do, and I am completely happy to use some hacker software if it provides the necessary functionality (and if I can get it in source-code form). If I can't find something, I'll have to write it using C, probably as an embedded HTTP server running on an odd port and doing some kind of XML-RPC thing. Here is the basic functionality I need:

     - Get a list of running processes
     - Kill a process
     - Start a process
     - Read/write/create/delete files

     I would also like to:

     - Read the contents of the screen
     - Read all controls on the screen
     - Send an arbitrary click to a Windows control

     Does anything like this exist?
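
     For what it's worth, the "embedded server speaking XML-RPC" part is quick to prototype if a Python runtime is acceptable inside the guest. The sketch below assumes Python 3 plus the third-party psutil package, and covers only the process-list/kill/start and file-read parts; the port number and function names are made up for illustration, not an existing tool.

         # Minimal XML-RPC agent to run inside the Windows guest (sketch only).
         import subprocess
         from xmlrpc.server import SimpleXMLRPCServer

         import psutil  # third-party: pip install psutil

         def list_processes():
             # Return [pid, name] pairs for every running process.
             return [[p.pid, p.name()] for p in psutil.process_iter()]

         def kill_process(pid):
             psutil.Process(pid).kill()
             return True

         def start_process(command_line):
             # Start a program and return its pid.
             return subprocess.Popen(command_line).pid

         def read_file(path):
             with open(path, "rb") as f:
                 return f.read()

         server = SimpleXMLRPCServer(("0.0.0.0", 8765), allow_none=True)
         for fn in (list_processes, kill_process, start_process, read_file):
             server.register_function(fn)
         server.serve_forever()

     The Linux host can then drive it with xmlrpc.client.ServerProxy("http://guest-ip:8765"); the screen-reading and clicking items would still need something Windows-specific layered on top.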

    Read the article

  • Init modules in apache2

    - by user306963
     Hello, I used to write Apache modules for Apache 1.3, but these days I want to move to Apache 2. The module that I am writing at the moment has its own binary data, not a database, for performance purposes. I need to load this data into shared memory so every child can access it without making its own copy, and it would be practical to load/create the binary data at startup, as I used to do with Apache 1.3. The problem is that I can't find an init event in Apache 2. In 1.3, in the module struct, immediately after STANDARD_MODULE_STUFF you find a place for a /** module initializer */, in which you can put a function that will be executed early. The body of the function I used to write is something like:

         if (getppid() == 1) {
             // this is the parent process: load the global data here
             void* data = loadGlobalData( someFilePath );
             setGlobalData( config, data );
         } else {
             // this is the init of a child process: do nothing
         }

     I am looking for a place in Apache 2 where I can put a similar function. Can you help? Thanks Benvenuto

    Read the article

  • Linux ext3 readdir and concurrent updates

    - by Wangnick
     Dear all, we are receiving about 10,000 messages per hour. We store them as individual files in hourly directories on an ext3 filesystem. The file name includes a sequence number. We use rsync to mirror these files every 20 seconds to another location (via a SAN, but that doesn't matter). Sometimes an rsync run picks up files n-3, n-2, n-1, n+1, and then the next rsync run continues with n, n+2, n+3, n+4 and so on. Is it possible that when one process creates files in a certain sequence within a directory, another process using readdir() sees the files appearing in a different sequence? Kind regards, Sebastian

    Read the article

  • Database on the fly with scripting languages

    - by afilatun
     I have a set of .csv files that I want to process. It would be far easier to process them with SQL queries, so I wonder if there is some way to load a .csv file and query it with SQL from a scripting language like Python or Ruby. Loading it with something similar to ActiveRecord would be awesome. The problem is that I don't want to have to run a database server somewhere prior to running my script; I shouldn't need any additional installations outside of the scripting language and some modules. My question is which language and which modules I should use for this task. I looked around and can't find anything that suits my needs. Is it even possible?
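
     One way to get this with nothing beyond the scripting language itself is SQLite's in-memory mode, which ships in Python's standard library. The sketch below is only an illustration: the file name, table name and the query at the end are invented, and every column is loaded as text.

         # Load a CSV into an in-memory SQLite database and query it with plain SQL.
         import csv
         import sqlite3

         def load_csv(conn, path, table):
             with open(path, newline="") as f:
                 rows = csv.reader(f)
                 header = next(rows)                      # first line holds the column names
                 cols = ", ".join(f'"{c}"' for c in header)
                 placeholders = ", ".join("?" for _ in header)
                 conn.execute(f'CREATE TABLE "{table}" ({cols})')
                 conn.executemany(
                     f'INSERT INTO "{table}" VALUES ({placeholders})', rows)

         conn = sqlite3.connect(":memory:")
         load_csv(conn, "orders.csv", "orders")           # hypothetical file and table
         for row in conn.execute("SELECT customer, COUNT(*) FROM orders GROUP BY customer"):
             print(row)

     Ruby has the equivalent via its sqlite3 gem, and both languages also ship a csv module that can handle simple filtering without SQL at all.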

    Read the article

  • Microsoft Detours

    - by Bruce
     I am new to Microsoft Detours. I have installed it to trace the system calls a process makes. I run the following commands, which I got from the web:

         syelogd.exe /q C:\Users\xxx\Desktop\log.txt
         withdll.exe /d:traceapi.dll C:\Program Files\Google\Google Talk\googletalk.exe

     I get the log file. The problem is I don't fully understand what is happening here. How does Detours work? How does it trace the system calls? Also, I don't know how to read the output in log.txt. Here is one line from log.txt:

         20101221060413329 2912 50.60: traceapi: 001 GetCurrentThreadId()

     Finally, I want to get the stack trace of the process. How can I get that?

    Read the article

  • Multiple merchant accounts with Activemerchant gem.

    - by sosborn
     I am developing a Rails site that will allow a group of merchants (5-10) to accept credit card orders online. I plan on using the ActiveMerchant gem to handle the processing. In this case, each merchant will have their own merchant account to handle the payments. Storing banking information like that is not something I am a fan of. This could be solved by queueing orders and allowing each merchant to log in to the site, input their credentials and process the orders. However, if I go that route, it seems to me that I would have to store the customers' credit card information temporarily until the merchant has the opportunity to log in and process the order, which to me is the greater evil. Has anyone dealt with this situation? If so, what options are available and what pitfalls should I look out for? In my mind, securing customer credit card information is priority number one, with the merchant account information a close second.

    Read the article

  • What happens when you run out of RAM with mlockall set?

    - by James Dean
     I am working on a C++ application that requires a large amount of memory for a batch run (20 GB). Some of my customers are running into memory limits where the OS sometimes starts swapping and the total run time doubles or worse. I have read that I can use mlockall() to keep the process from being swapped out. What would happen when the process's memory requirements approach or exceed the available physical memory in this way? I guess the answer might be OS-specific, so please list the OS in your answer.

    Read the article

  • Multiple Socket Connections

    - by BSchlinker
     I need to write a server which accepts connections from multiple client machines, keeps track of connected clients and sends individual clients data as necessary. Sometimes all clients may be contacted at once with the same message; other times it may be one individual client or a group of clients. Since I need confirmation that the clients received the information and don't want to build an ACK structure on top of a UDP connection, I decided to use a TCP streaming approach. However, I've been struggling to understand how to maintain multiple connections and keep them idle. I seem to have three options: use fork() on each incoming connection to create a separate child process, use pthread_create() to create an entirely new thread for each connection, or use select() to wait on all open socket descriptors for activity (see the sketch below). Any recommendations as to how to attack this? I've begun working with pthreads, but since performance will likely not be an issue, multicore processing is not necessary and perhaps there is a simpler way.
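
     For reference, here is the single-threaded select() option in miniature. It is written in Python purely to keep the sketch short; the loop structure (listen, select over every descriptor, then either accept a new client or read from an existing one) maps one-to-one onto select()/FD_SET in C. The port and the echo-style "ACK" reply are placeholders.

         # Single-threaded server: one select() loop tracks every connected client.
         import select
         import socket

         listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
         listener.bind(("0.0.0.0", 9000))            # placeholder port
         listener.listen()
         clients = []                                 # currently connected client sockets

         while True:
             readable, _, _ = select.select([listener] + clients, [], [])
             for sock in readable:
                 if sock is listener:
                     conn, _addr = listener.accept()  # a new client connected
                     clients.append(conn)
                 else:
                     data = sock.recv(4096)
                     if not data:                     # client hung up
                         clients.remove(sock)
                         sock.close()
                     else:
                         sock.sendall(b"ACK: " + data)   # confirm receipt

     Because the list of connected sockets lives in one thread, "send this message to everyone" or "send to this subset" is just a loop over clients (or a chosen slice of it) calling sendall().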

    Read the article

  • Recreation of MySQL DB using "mysql mydb < mydb.sql" is really slow when the table has tens of millions of records

    - by Jian Lin
     It seems that a MySQL database that has a table with tens of millions of records will get a big INSERT INTO statement when

         mysqldump some_db > some_db.sql

     is done to back up the database. (Is it one INSERT statement that handles all the records?) So when reconstructing the DB using

         mysql some_db < some_db.sql

     the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem to be too busy either... Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing the mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't need to parse such long lines when restoring the DB?

    Read the article

  • MySQL event to update another database table

    - by Lee
     Hey all, I have just taken over a project for a client and the database schema is in a total mess. I would like to rename a load of fields and turn it into a relational database, but doing this will be a painstaking process as they also have an API running off it. So the idea would be to create a new database and start rewriting the code to use that instead. But I need a way to keep these tables in sync during this process. Would you agree that I should use MySQL EVENTs to keep updating the new tables on inserts, updates and deletes? Or can you suggest a better way? Hope you can advise! Thanks for any input I get.

    Read the article

  • Top Innovations for Sales Managers

    - by divya.malik
     Sales managers are always looking for ways to motivate their troops as well as make themselves more effective and productive. Here is a small X’mas present for those folks that are looking for some effective tips. Our friends at Selling Power magazine recently wrote an interesting blog post with top 10 best practices for sales managers. Here we go:

     1. Harness social media
     2. Strategically align marketing campaigns with sales efforts
     3. Establish a customer-centric sales process
     4. Realize ROI with CRM
     5. Embrace online collaboration
     6. Improve accuracy in sales forecasting and pipeline metrics
     7. Coach for sales success
     8. Leverage mobile technology
     9. Focus on sales enablement
     10. Improve sales performance and compensation management

     We have a complete suite of sales applications, to help increase sales revenues, sales productivity as well as to improve your sales execution. You can find more details here. For more details on the SellingPower blog post click here. Happy Holidays to you and your family.

    Read the article

  • First Foray – About timeout

    - by SQLMonger
     It has been quite a while since I signed up for this blog site and high time that something was posted. I have a list of topics that I will be working through and posting. Some I am sure will have been posted by others, but I will be sticking to the technical problems and challenges that I’ve recently faced, and the solutions that worked for me. My motto when learning something new has always been “My kingdom for an example!”, and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

     A bit of background about me… My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

     Enough about me… On to a real problem: SSIS connection timeouts versus command timeouts.

     Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60 month window.

     My client uses Tivoli for running all their production jobs, not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed with the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After a number more failed attempts, and getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then. At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate “CommandTimeout” custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected “Show Advanced Editor” and found the property. Sure enough, it was set to 30 seconds.

     The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn’t. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

     The lesson learned for me was two-fold:

     1. Always compare query execution times between development and production environments, and don’t assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.
     2. SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

     Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair though, in the 5+ years that I have been working with SSIS, I could only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

    Read the article
