Search Results

Search found 23555 results on 943 pages for 'command timeout'.

  • ExecutionException and InterruptedException while using Future class's get() method

    - by java_geek
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Task t = new Task(response, inputToPass, pTypes, unit.getInstance(), methodName, unit.getUnitKey());
            Future<SCCallOutResponse> fut = executor.submit(t);
            response = fut.get(unit.getTimeOut(), TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // if the task is still running, a TimeoutException will occur during fut.get()
            cat.error("Unit " + unit.getUnitKey() + " Timed Out");
            response.setVote(SCCallOutConsts.TIMEOUT);
        } catch (InterruptedException e) {
            cat.error(e);
        } catch (ExecutionException e) {
            cat.error(e);
        } finally {
            executor.shutdown();
        }

    How should I handle the InterruptedException and ExecutionException in this code? And in what cases are these exceptions thrown?
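
    A minimal, self-contained sketch of one common way to handle the two exceptions (Java 8+ assumed; the task body and timeout are placeholders): restore the interrupt flag for InterruptedException, and unwrap getCause() for ExecutionException, which wraps whatever the task itself threw:

        import java.util.concurrent.*;

        public class FutureHandling {
            public static void main(String[] args) {
                ExecutorService executor = Executors.newSingleThreadExecutor();
                Future<String> fut = executor.submit(() -> "done");
                try {
                    System.out.println(fut.get(500, TimeUnit.MILLISECONDS));
                } catch (TimeoutException e) {
                    fut.cancel(true);   // interrupt the still-running task
                } catch (InterruptedException e) {
                    // The thread waiting on get() was interrupted, not the task:
                    // restore the flag so callers up the stack can see it.
                    Thread.currentThread().interrupt();
                } catch (ExecutionException e) {
                    // The task itself threw; the real exception is the cause.
                    System.err.println("task failed: " + e.getCause());
                } finally {
                    executor.shutdown();
                }
            }
        }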

  • Can Boost program_options separate comma-separated argument values?

    - by lrm
    If my command line is:

        > prog --mylist=a,b,c

    can Boost's program_options be set up to see three distinct argument values for the mylist argument? I have configured program_options as:

        namespace po = boost::program_options;
        po::options_description opts("blah");
        opts.add_options()
            ("mylist", po::value<std::vector<std::string> >()->multitoken(), "description");
        po::variables_map vm;
        po::store(po::parse_command_line(argc, argv, opts), vm);
        po::notify(vm);

    When I check the value of the mylist argument, I see one value: a,b,c. I'd like to see three distinct values, split on the comma. This works fine if I specify the command line as:

        > prog --mylist=a b c

    or:

        > prog --mylist=a --mylist=b --mylist=c

    Is there a way to configure program_options so that it sees a,b,c as three values that should each be inserted into the vector, rather than one? I am using Boost 1.41, g++ 4.5.0 20100520, and have enabled C++0x experimental extensions.
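
    program_options does not split on commas by itself; a hedged workaround (a sketch, not the only way) is to parse the values as strings and post-process them with boost::algorithm::split:

        #include <boost/algorithm/string.hpp>
        #include <boost/program_options.hpp>
        #include <iostream>
        #include <string>
        #include <vector>

        namespace po = boost::program_options;

        int main(int argc, char* argv[]) {
            po::options_description opts("blah");
            opts.add_options()
                ("mylist", po::value<std::vector<std::string> >()->multitoken(), "description");
            po::variables_map vm;
            po::store(po::parse_command_line(argc, argv, opts), vm);
            po::notify(vm);

            std::vector<std::string> items;
            if (vm.count("mylist")) {
                const std::vector<std::string>& raw =
                    vm["mylist"].as<std::vector<std::string> >();
                // Split every raw token on commas, so --mylist=a,b,c and
                // --mylist=a b c both yield three entries.
                for (std::size_t i = 0; i < raw.size(); ++i) {
                    std::vector<std::string> parts;
                    boost::split(parts, raw[i], boost::is_any_of(","));
                    items.insert(items.end(), parts.begin(), parts.end());
                }
            }
            for (std::size_t i = 0; i < items.size(); ++i)
                std::cout << items[i] << "\n";
            return 0;
        }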

  • IndexOutOfRangeException while using WriteLine in nested Parallel.For loops

    - by Umar Asif
    I am trying to write Kinect depth data to a text file using nested Parallel.For loops with the following code. However, it throws an IndexOutOfRangeException. The code works perfectly with simple for loops, but it hangs the UI, since the depth format is set to 640x480, causing the loops to write 307,200 lines to the text file at 30 fps. Therefore, I switched to the Parallel.For scheme. If I omit the WriteLine command from the nested loops, the code works fine, which indicates that the IndexOutOfRangeException arises at the WriteLine command. I do not know how to troubleshoot this. Please advise. Any better workarounds to avoid UI freezing? Thanks.

        using (DepthImageFrame depthImageframe = d.OpenDepthImageFrame())
        {
            if (depthImageframe == null)
                return;
            depthImageframe.CopyPixelDataTo(depthPixelData);
            swDepth = new StreamWriter(@"E:\depthData.txt", false);
            int i = 0;
            Parallel.For(0, depthImageframe.Width, delegate(int x)
            {
                Parallel.For(0, depthImageframe.Height, delegate(int y)
                {
                    p[i] = sensor.MapDepthToSkeletonPoint(depthImageframe.Format, x, y,
                        depthPixelData[x + depthImageframe.Width * y]);
                    swDepth.WriteLine(i + "," + p[i].X + "," + p[i].Y + "," + p[i].Z);
                    i++;
                });
            });
            swDepth.Close();
        }
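
    A sketch of one way out (not the asker's full app; the frame size and file path are assumptions): derive the index from x and y instead of a shared counter, buffer the lines, and keep all I/O on one thread, since neither the unsynchronized i++ nor StreamWriter is thread-safe:

        using System.IO;
        using System.Threading.Tasks;

        class DepthDump
        {
            static void Main()
            {
                const int width = 640, height = 480;      // assumed frame size
                var lines = new string[width * height];

                Parallel.For(0, width, x =>
                {
                    for (int y = 0; y < height; y++)
                    {
                        int i = x * height + y;            // unique, race-free index
                        // p[i] = sensor.MapDepthToSkeletonPoint(...) would go here
                        lines[i] = string.Format("{0},{1}", x, y);
                    }
                });

                // StreamWriter is not thread-safe; write sequentially afterwards.
                File.WriteAllLines(@"E:\depthData.txt", lines);
            }
        }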

  • Dynamically add items to Tkinter Canvas

    - by nick369
    I'm attempting to learn Tkinter with the goal of being able to create a 'real-time' scope to plot data. As a test, I'm trying to draw a polygon on the canvas every time the 'draw' button is pressed. The triangle position is randomized. I have two problems: there is a triangle on the canvas as soon as the program starts - why, and how do I fix this? And it doesn't draw any triangles when I press the button, at least none that I can see.

        from Tkinter import *
        from random import randint

        class App:
            def __init__(self, master):
                #frame = Frame(master)
                #frame.pack(side = LEFT)
                self.plotspc = Canvas(master, height = 100, width = 200, bg = "white")
                self.plotspc.grid(row = 0, column = 2, rowspan = 5)
                self.button = Button(master, text = "Quit", fg = "red",
                                     command = master.quit)
                self.button.grid(row = 0, column = 0)
                self.drawbutton = Button(master, text = "Draw",
                                         command = self.pt([50, 50]))
                self.drawbutton.grid(row = 0, column = 1)

            def pt(self, coords):
                coords[0] = coords[0] + randint(-20, 20)
                coords[1] = coords[1] + randint(-20, 20)
                x = (0, 5, 10)
                y = (0, 10, 0)
                xp = [coords[0] + xv for xv in x]
                yp = [coords[1] + yv for yv in y]
                ptf = zip(xp, yp)
                self.plotspc.create_polygon(*ptf)

        if __name__ == "__main__":
            root = Tk()
            app = App(root)
            root.mainloop()
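
    A sketch of the likely fix (a hypothetical minimal app, Python 2 Tkinter): command=self.pt([50, 50]) calls pt once during __init__ - that is the startup triangle - and binds its return value, None, to the button, so clicks do nothing. Deferring the call with a lambda addresses both symptoms:

        from Tkinter import *
        from random import randint

        class App:
            def __init__(self, master):
                self.canvas = Canvas(master, height=100, width=200, bg="white")
                self.canvas.grid(row=0, column=1)
                # A lambda defers the call until the button is pressed;
                # command=self.pt([50, 50]) would run it immediately instead.
                Button(master, text="Draw",
                       command=lambda: self.pt([50, 50])).grid(row=0, column=0)

            def pt(self, coords):
                x0 = coords[0] + randint(-20, 20)
                y0 = coords[1] + randint(-20, 20)
                self.canvas.create_polygon(x0, y0, x0 + 5, y0 + 10, x0 + 10, y0)

        if __name__ == "__main__":
            root = Tk()
            App(root)
            root.mainloop()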

  • Copying contents of a MySQL table to a table in another (local) database

    - by Philip Eve
    I have two MySQL databases for my site - one is for a production environment and the other, much smaller, is for a testing/development environment. Both have identical schemas (except when I am testing something I intend to change, of course). A small number of the tables are for internationalisation purposes:

        TransLanguage          - non-English languages
        TransModule            - modules (bundles of phrases for translation, that can be loaded individually by PHP scripts)
        TransPhrase            - individual phrases, in English, for potential translation
        TranslatedPhrase       - translations of phrases that are submitted by volunteers
        ChosenTranslatedPhrase - screened translations of phrases

    The volunteers who do translation are all working on the production site, as they are regular users. I wanted to create a stored procedure that could be used to synchronise the contents of four of these tables - TransLanguage, TransModule, TransPhrase and ChosenTranslatedPhrase - from the production database to the testing database, so as to keep the test environment up to date and prevent "unknown phrase" errors from getting in the way while testing. My first effort was to create the following procedure in the test database:

        CREATE PROCEDURE `SynchroniseTranslations` ()
        LANGUAGE SQL
        NOT DETERMINISTIC
        MODIFIES SQL DATA
        SQL SECURITY DEFINER
        BEGIN
            DELETE FROM `TransLanguage`;
            DELETE FROM `TransModule`;
            INSERT INTO `TransLanguage` SELECT * FROM `PRODUCTION_DB`.`TransLanguage`;
            INSERT INTO `TransModule` SELECT * FROM `PRODUCTION_DB`.`TransModule`;
            INSERT INTO `TransPhrase` SELECT * FROM `PRODUCTION_DB`.`TransPhrase`;
            INSERT INTO `ChosenTranslatedPhrase` SELECT * FROM `PRODUCTION_DB`.`ChosenTranslatedPhrase`;
        END

    When I try to run this, I get an error message: "SELECT command denied to user 'username'@'localhost' for table 'TransLanguage'". I also tried to create the procedure the other way around (that is, as part of the data dictionary for the production database rather than the test database). If I do it that way, I get an identical message, except it tells me I'm denied the DELETE command rather than SELECT. I have made sure that my user has INSERT, DELETE, SELECT, UPDATE and CREATE ROUTINE privileges on both databases. However, it seems as though MySQL is reluctant to let this user exercise its privileges on both databases at the same time. How come, and is there a way around this?
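
    A hedged sketch of the usual explanation: with SQL SECURITY DEFINER the body runs under the procedure's DEFINER account, not the caller, so that account needs explicit grants on both schemas. The account and schema names below mirror the error message and are assumptions:

        -- Run as an administrative user; TEST_DB stands in for the test schema.
        GRANT SELECT ON `PRODUCTION_DB`.* TO 'username'@'localhost';
        GRANT SELECT, INSERT, DELETE ON `TEST_DB`.* TO 'username'@'localhost';
        FLUSH PRIVILEGES;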

  • Getting table schema from a query

    - by Appu
    As per MSDN, SqlDataReader.GetSchemaTable returns column metadata for the query executed. I am wondering whether there is a similar method that will give table metadata for a given query - I mean, which tables are involved and what aliases they have. In my application, I get the query and I need to append the WHERE clause programmatically. Using GetSchemaTable(), I can get the column metadata and the table each column belongs to. But even though the table has an alias, it still returns the real table name. Is there a way to get the alias name for that table? The following code shows getting the column metadata:

        const string connectionString = "your_connection_string";
        string sql = "select c.id as s, c.firstname from contact as c";
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            connection.Open();
            SqlDataReader reader = command.ExecuteReader(CommandBehavior.KeyInfo);
            DataTable schema = reader.GetSchemaTable();
            foreach (DataRow row in schema.Rows)
            {
                foreach (DataColumn column in schema.Columns)
                {
                    Console.WriteLine(column.ColumnName + " = " + row[column]);
                }
                Console.WriteLine("----------------------------------------");
            }
            Console.Read();
        }

    This gives me the details of the columns correctly. But when I look at BaseTableName for the column id, it gives contact rather than the alias name c. Is there any way to get the table schema and aliases from a query like the above? Any help would be great!

  • Telnet SMTP with expect or shell script

    - by Fendrix
    I want to build up an authenticated SMTP connection with an expect script. Just to test, I wanted to get the ehlo parameters, but expect is not working like this:

        #!/usr/bin/expect
        set timeout -1
        set smtp [lindex $argv 0]
        set port [lindex $argv 1]
        spawn telnet $smtp $port
        expect "[2]{2,}[0]{1,}"
        send "ehlo"

    I expect the code 220 to come from the mail server so I can continue and send ehlo, just like:

        ..../...: telnet smtp.mail.yahoo.de 25
        Trying 77.238.184.85...
        Connected to smtp2-de.mail.vip.ukl.yahoo.com.
        Escape character is '^]'.
        220 smtp116.mail.ukl.yahoo.com ESMTP
        ehlo
        250-smtp116.mail.ukl.yahoo.com
        250-AUTH LOGIN PLAIN XYMCOOKIE
        250-PIPELINING
        250-SIZE 41697280
        250 8BITMIME
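
    A sketch under the same assumptions (host and port as argv; the hostname argument is made up): expect patterns are glob-style unless -re asks for a regex, so "[2]{2,}[0]{1,}" never matches the greeting, and the EHLO line needs a carriage return plus a hostname:

        #!/usr/bin/expect
        set timeout 10
        set smtp [lindex $argv 0]
        set port [lindex $argv 1]
        spawn telnet $smtp $port
        expect -re "220 "              ;# server greeting
        send "EHLO test.example.com\r" ;# hostname argument is an assumption
        expect -re "250 "              ;# last line of the EHLO reply
        send "QUIT\r"
        expect eof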

  • Windows Service Installation

    - by Goober
    Scenario: I have a server that has NO Visual Studio installed. It literally has a normal command prompt and nothing installed yet. We don't want to install anything (except the .NET Framework, which we have already done). We just want to install a bunch of C# Windows services that we have written. So far I have been installing and running the Windows service on my local machine using a "setup and deploy" project that I built into the application, which I could then use to install the service locally.

    Question: How can I install the service on the server? I imagine it can be done from the command prompt alone, but what else do I need, if anything? And where do I put the files that I want to install BEFORE I install them? I imagine I will have to compile the application on my local machine in Visual Studio, then copy it over to the server, and then run an install utility to install it on the server? Any help would be greatly appreciated.
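
    A hedged sketch of that last step (the paths and the service name MyService are assumptions): InstallUtil.exe ships with the .NET Framework itself, so nothing beyond the framework is needed on the server:

        rem Copy the compiled binaries anywhere on the server, e.g. C:\Services,
        rem then register the service with the framework's own installer tool.
        cd C:\Services
        %WINDIR%\Microsoft.NET\Framework\v2.0.50727\InstallUtil.exe MyService.exe
        net start MyService

        rem Alternative if the exe lacks an installer class:
        sc create MyService binPath= "C:\Services\MyService.exe"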

  • How to replace pairs of strings in two files to identical IDs?

    - by Péter Török
    Sorry if the title is not very intelligible, I couldn't come up with anything better. Hopefully my explanation is clear enough: I have a pair of rather large log files with very similar content, except that some strings are different between the two. A couple of examples:

        UnifiedClassLoader3@19518cc | UnifiedClassLoader3@d0357a
        JBossRMIClassLoader@13c2d7f | JBossRMIClassLoader@191777e

    That is, wherever the first file contains UnifiedClassLoader3@19518cc, the second contains UnifiedClassLoader3@d0357a, and so on. [Update] There are about 40 distinct pairs of such identifiers. [/Update] I want to replace these with identical IDs so that I can spot the really important differences between the two files. That is, I want to replace all occurrences of both UnifiedClassLoader3@19518cc in file1 and UnifiedClassLoader3@d0357a in file2 with UnifiedClassLoader3@1; all occurrences of both JBossRMIClassLoader@13c2d7f in file1 and JBossRMIClassLoader@191777e in file2 with JBossRMIClassLoader@2; etc. Using the Cygwin shell, so far I have managed to list all the distinct identifiers occurring in one of the files with:

        grep -o -e 'ClassLoader[0-9]*@[0-9a-f][0-9a-f]*' file1.log | sort | uniq

    However, now the original order is lost, so I don't know which ID in one file pairs with which in the other. With grep -n I can get the line number, so the sort would preserve the order of appearance, but then I can't weed out the duplicate occurrences. Unfortunately, grep cannot print only the first match of a pattern. I figured I could save the list of identifiers produced by the above command into a file, then iterate over the patterns in the file with grep -n | head -n 1, concatenate the results and sort them again. The result would be something like:

        2   ClassLoader3@19518cc
        137 ClassLoader@13c2d7f
        563 ClassLoader3@1267649
        ...

    Then I could (either manually or with sed itself) massage this into a sed command like:

        sed -e 's/ClassLoader3@19518cc/ClassLoader3@2/g' -e 's/ClassLoader@13c2d7f/ClassLoader@137/g' -e 's/ClassLoader3@1267649/ClassLoader3@563/g' file1.log > file1_processed.log

    and similarly for file2. However, before I start, I would like to verify that my plan is the simplest possible working solution. Is there any flaw in this approach? Is there a simpler way?
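
    A sketch of a shorter route, assuming paired identifiers first appear at corresponding points in the two logs: awk '!seen[$0]++' keeps only the first occurrence of each line while preserving order, which sort | uniq cannot do, and the rest is mechanical:

        #!/bin/sh
        PAT='ClassLoader[0-9]*@[0-9a-f][0-9a-f]*'
        grep -o -e "$PAT" file1.log | awk '!seen[$0]++' > ids1.txt
        grep -o -e "$PAT" file2.log | awk '!seen[$0]++' > ids2.txt

        # Pair the lists line by line and emit one s/old/new/g per identifier,
        # keeping the class name and numbering the pairs 1, 2, 3, ...
        paste ids1.txt ids2.txt | awk '{ split($1, a, "@");
            printf "s/%s/%s@%d/g\n", $1, a[1], NR }' > subst1.sed
        paste ids1.txt ids2.txt | awk '{ split($1, a, "@");
            printf "s/%s/%s@%d/g\n", $2, a[1], NR }' > subst2.sed

        sed -f subst1.sed file1.log > file1_processed.log
        sed -f subst2.sed file2.log > file2_processed.log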

  • Newb Question: scanf() in C

    - by riemannliness
    So I started learning C today, and as an exercise I was told to write a program that asks the user for numbers until they type a 0, then adds the even ones and the odd ones together. Here it is (don't laugh at my bad style):

        #include <stdio.h>

        int main()
        {
            int esum = 0, osum = 0;
            int n, mod;

            puts("Please enter some numbers, 0 to terminate:");
            scanf("%d", &n);

            while (n != 0) {
                mod = n % 2;
                switch (mod) {
                case 0:
                    esum += n;
                    break;
                case 1:
                    osum += n;
                }
                scanf("%d", &n);
            }

            printf("The sum of evens:%d,\t The sum of odds:%d", esum, osum);
            return 0;
        }

    My question concerns the mechanics of the scanf() function. It seems that when you enter several numbers at once, separated by spaces (e.g. 1 22 34 2 8), the scanf() function somehow remembers each distinct number in the line, and steps through the while loop for each one respectively. Why/how does this happen? Example interaction within the command prompt:

        Please enter some numbers, 0 to terminate:
        42 8 77 23 11 (enter)
        0 (enter)
        The sum of evens:50, The sum of odds:111

    I'm running the program through the command prompt; it's compiled for win32 platforms with Visual Studio.

  • Unable to connect to sql database only from asp

    - by Brian
    In a VB6 program:

        Dim conn As Object
        Set conn = CreateObject("ADODB.Connection")
        conn.Open "DRIVER={SQL Server}; Server=(local)\aaa; Database=bbb; UID=ccc; PWD=ddd"

    In an ASP program:

        Sub ProcessSqlServer(conn)
            Set conn = Server.CreateObject("ADODB.Connection")
            conn.Open "DRIVER={SQL Server}; Server=(local)\aaa; Database=bbb; UID=ccc; PWD=ddd"
        End Sub

    The VB6 program works; the ASP program does not (see the error below). I tried checking the event log for errors, but found nothing. Or, more precisely, I did find a local activation permission error, but this was fixed once I added local launch/activation permission for Network Service to the Machine Debug Manager via the Component Services tool. Error:

        Microsoft OLE DB Provider for ODBC Drivers error '80004005'
        [Microsoft][ODBC SQL Server Driver]Timeout expired

  • What mutex/locking/waiting mechanism to use when writing a Chat application with Tornado Web Framework

    - by user272973
    We're implementing a chat server using Tornado. The premise is simple: a user opens an HTTP Ajax connection to the Tornado server, and the Tornado server answers only when a new message appears in the chat room. Whenever the connection closes, regardless of whether a new message came in or an error/timeout occurred, the client reopens the connection. Looking at Tornado, the question arises: what library can we use to let these calls wait on some central object that would signal them - A_NEW_MESSAGE_HAS_ARRIVED_ITS_TIME_TO_SEND_BACK_SOME_DATA? To describe this in Win32 terms, each async call would be represented as a thread hanging on a WaitForSingleObject(...) on some central Mutex/Event/etc. We will be operating in a standard Python environment (Tornado). Is there something built-in we can use? Do we need an external library/server? Is there something Tornado recommends? Thanks
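
    A minimal sketch, assuming the callback-based Tornado API of that era (URLs and argument names are made up): because Tornado is single-threaded, no mutex is needed at all - a plain list of parked callbacks plays the role of the Win32 event:

        import tornado.ioloop
        import tornado.web

        waiters = []    # callbacks of clients parked on a long poll

        class PollHandler(tornado.web.RequestHandler):
            @tornado.web.asynchronous
            def get(self):
                waiters.append(self.on_message)   # park until a message arrives

            def on_message(self, message):
                if not self.request.connection.stream.closed():
                    self.finish({"message": message})

        class PostHandler(tornado.web.RequestHandler):
            def post(self):
                message = self.get_argument("body")
                # Wake every parked client; each call completes one response.
                for callback in waiters[:]:
                    callback(message)
                del waiters[:]

        app = tornado.web.Application([(r"/poll", PollHandler),
                                       (r"/post", PostHandler)])
        if __name__ == "__main__":
            app.listen(8888)
            tornado.ioloop.IOLoop.instance().start()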

  • Gacutil.exe successfully adds assembly, but assembly missing from GAC. Why?

    - by Ben McCormack
    I'm running GacUtil.exe from within the Visual Studio Command Prompt 2010 to register a DLL (CatalogPromotion.dll) to the GAC. After running the utility, it says "Assembly successfully added to the cache", and running gacutil /l CatalogPromotionDll shows that the GAC contains the assembly, but I can't see the assembly when I navigate to C:\WINDOWS\assembly in Windows Explorer. Why can't I see the assembly in C:\WINDOWS\assembly from Windows Explorer even though I can see it using gacutil.exe? Background: here's what I typed into the command prompt for VS Tools:

        C:\_Dev Projects\VS Projects\bmccormack\CatalogPromotion\CatalogPromotionDll\bin\Debug>gacutil /i CatalogPromotionDll.dll
        Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.1
        Copyright (c) Microsoft Corporation.  All rights reserved.

        Assembly successfully added to the cache

        C:\_Dev Projects\VS Projects\bmccormack\CatalogPromotion\CatalogPromotionDll\bin\Debug>gacutil /l CatalogPromotionDll
        Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.1
        Copyright (c) Microsoft Corporation.  All rights reserved.

        The Global Assembly Cache contains the following assemblies:
          CatalogPromotionDll, Version=1.0.0.0, Culture=neutral, PublicKeyToken=9188a175f199de4a, processorArchitecture=MSIL

        Number of items = 1

    However, the assembly doesn't show up in C:\WINDOWS\assembly.
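
    A hedged explanation with a quick check (the path below is the standard .NET 4 location, not anything project-specific): the 4.0.30319 gacutil installs into the CLR 4 GAC under Microsoft.NET, while the C:\WINDOWS\assembly Explorer view only renders the older CLR 2.0 cache:

        rem List the .NET 4 GAC entry directly instead of relying on the
        rem Explorer shell view, which only shows the CLR 2.0 cache.
        dir /s C:\Windows\Microsoft.NET\assembly\GAC_MSIL\CatalogPromotionDll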

  • capturing CMD batch file parameter list; write to file for later processing

    - by BobB
    I have written a batch file that is launched as a post-processing utility by a program. The batch file reads ~24 parameters supplied by the calling program, stores them in variables, and then writes them to various text files. Since the highest directly addressable parameter in CMD is %9, it's necessary to use the shift command to repeatedly read and store these individually in named variables. Because the program outputs several similar batch files, the result is opening several CMD windows sequentially, assigning variables and writing data files. This ties up the calling program for too long. It occurs to me that I could free up the calling program much faster if there were a way to write a very simple batch file that writes all the command parameters to a text file, where I can process them later. Basically, just grab the parameter list, write it, and done. Q: Is there some way to treat an entire series of parameter data as one big text string, write it to one big variable... and then echo the whole thing to one text file? Then later read the string into %n variables when there's no program waiting to resume? The parameter list is something like 25-30 words, less than 200 characters. Sample parameter list:

        "First Name" "Lastname" "123 Steet Name Way" "Cityname" ST 12345 1004968 06/01/2010 "Firstname+Lastname" 101738 "On Account" 20.67 xy-1z 1 8.95 3.00 1.39 0 0 239 8.95

    Items in quotes are processed as string variables. The list is space-delimited. Any suggestions?
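
    A sketch of exactly that (the file names are made up): %* expands to the whole parameter list in one go, so the fast stub needs only one echo, and a later offline pass can replay each saved line as arguments:

        @echo off
        rem Fast stub launched by the calling program: dump and exit.
        >> "C:\queue\params.txt" echo %*

        rem Later, offline: feed each saved line back as parameters.
        for /f "usebackq delims=" %%L in ("C:\queue\params.txt") do call worker.bat %%L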

  • ADODB.Connection undefined

    - by Wes Groleau
    Reference: http://stackoverflow.com/questions/1690622/excel-vba-to-sql-server-without-ssis

    After I got the above working, I copied all the global variables/constants from the routine, which included:

        Const CS As String = "Driver={SQL Server};" _
            & "Server=**;" _
            & "Database=**;" _
            & "UID=**;" _
            & "PWD=**"

        Dim DB_Conn As ADODB.Connection
        Dim Command As ADODB.Command
        Dim DB_Status As String

    into a similar module in another spreadsheet. I also copied:

        Sub Connect_To_Lockbox()
            If DB_Status <> "Open" Then
                Set DB_Conn = New Connection
                DB_Conn.ConnectionString = CS
                DB_Conn.Open    ' problem!
                DB_Status = "Open"
            End If
        End Sub

    I added the same reference (ADO 2.8). The first spreadsheet still works; the second, at DB_Conn.Open, pops up "Run-time error '-2147467259 (80004005)': [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified". Removing the references on both, saving the files, re-opening, and re-adding the references doesn't help. The one still works and the other gets the error. ?!?

  • Ruby on Rails Mongrel web server stuck when MySQL service is running

    - by Marcos Buarque
    Hi, I am a Ruby on Rails newbie and already have a problem. I have started the Mongrel web server and it works fine when the MySQL service isn't running. But when MySQL is on, Mongrel gets stuck and stops serving pages. So far, I have tested the localhost:3000 URL. When MySQL is off, it serves the page. When I click "about application's environment", I get the message (of course) "Can't connect to MySQL server on 'localhost' (10061)". After starting the MySQL service and refreshing, I get no more answers and Mongrel does not serve the web page. It gets stuck with no answer to the browser. Then I have to stop the web server and restart it. I have installed the mysql2 gem with the command gem install mysql2. I was able to create the _test and _development databases with the command rake db:create. I have tested with the MySQL root user and a blank password, and also tried with a superuser I created. No success. Here is the server log:

        Started GET "/rails/info/properties" for 127.0.0.1 at Fri Dec 24 17:41:25 -0200 2010

        Mysql2::Error (Can't connect to MySQL server on 'localhost' (10061)):

        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms)
        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (5.0ms)
        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (35.0ms)

    I am running in a Windows 7 environment with the firewall down.

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:

        - The GUI tools aren't mature.
        - Using the command-line tools, it's far too easy to screw up a merge and obliterate someone else's changes.
        - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges.
        - If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with.
        - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce.
        - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions).
        - With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or to do something like get everything as of 4:30 yesterday afternoon.

    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command-line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"
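
    One layer often suggested for the permissions items above (an illustration, not an endorsement; every group, repo and branch name below is made up): gitolite wraps the central repositories and adds per-user, per-branch rules, roughly of this shape:

        # gitolite.conf sketch - all names hypothetical
        @leads = alice bob
        @devs  = carol dave

        repo bigproject
            RW  refs/heads/master    = @leads   # only leads may push master
            -   refs/heads/master    = @devs    # devs are denied master outright
            RW+ refs/heads/sandbox/  = @devs @leads
            R                        = @all     # anyone may clone and fetch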

  • How do I test how my client program will react to a web server error?

    - by Brad Bruce
    I have a client program based on LibCurl. I have run into a situation where on some occasions a program on the server runs longer than the configured IIS timeout. IIS then terminates the program and returns a 502 error status to the client. I have added code to the client to capture this issue. Now, I need to find a way to prove that the change will work. I haven't been able to reliably reproduce the issue, so don't have a good test case. Any suggestions?
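
    A sketch of one way (Python 2 standard library; the port and message are arbitrary): stand up a throwaway local server that always answers 502 and point the libcurl client at it, so the new handling path runs deterministically:

        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

        class BadGateway(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(502)      # simulate IIS killing the app
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write("Bad Gateway (simulated)\n")
            do_POST = do_GET                 # same behavior for POSTs

        if __name__ == "__main__":
            HTTPServer(("localhost", 8502), BadGateway).serve_forever()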

  • Calculate the retrieved rows in database Visual C#

    - by Tanya Lertwichaiworawit
    I am new to Visual C# and want to know how to do calculations on data retrieved from a database. Using the above GUI, when "Calculate" is clicked, the program should display the number of students in textBox1 and the average GPA of all students in textBox2, based on my database table "Students". I was able to display the number of students, but I'm still confused about how to calculate the average GPA. Here's my code:

        private void button1_Click(object sender, EventArgs e)
        {
            string connection = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Database1.accdb";
            OleDbConnection connect = new OleDbConnection(connection);
            string sql = "SELECT * FROM Students";
            connect.Open();
            OleDbCommand command = new OleDbCommand(sql, connect);
            DataSet data = new DataSet();
            OleDbDataAdapter adapter = new OleDbDataAdapter(command);
            adapter.Fill(data, "Students");
            textBox1.Text = data.Tables["Students"].Rows.Count.ToString();
            double gpa;
            for (int i = 0; i < data.Tables["Students"].Rows.Count; i++)
            {
                gpa = Convert.ToDouble(data.Tables["Students"].Rows[i][2]);
            }
            connect.Close();
        }
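
    A sketch of the missing step, continuing the snippet above (it assumes, as the loop does, that column index 2 holds the GPA): accumulate a sum instead of overwriting gpa on each pass, then divide by the row count:

        double sum = 0;
        int count = data.Tables["Students"].Rows.Count;
        for (int i = 0; i < count; i++)
        {
            sum += Convert.ToDouble(data.Tables["Students"].Rows[i][2]);
        }
        textBox2.Text = (count > 0 ? sum / count : 0.0).ToString("0.00");

    Alternatively, the database can do the work with a query such as SELECT AVG(GPA) FROM Students, if the column is really named GPA.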

  • (2006, 'MySQL server has gone away') in WSGI django

    - by Stefano Borini
    I'm getting a "MySQL server has gone away" error with Django under WSGI. I found entries for this problem on Stack Overflow, but nothing Django-specific. Google does not help, except for workarounds (like polling the website every once in a while, or increasing the database timeout). Nothing definitive. Technically, Django and/or MySQLdb (I'm using the latest, 1.2.3c1) should attempt a reconnect if the server hung up the connection, but this does not happen. How can I solve this issue without workarounds?
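
    A sketch of one commonly cited remedy (hedged: this is per-request connection hygiene, not a true auto-reconnect, and the function name is made up): ping the raw MySQLdb connection before use and close it if it has gone stale, so Django opens a fresh one on the next query:

        from django.db import connection
        from MySQLdb import OperationalError

        def ensure_connection_usable():
            """Call at the start of a request, e.g. from middleware."""
            try:
                connection.connection.ping()   # cheap server round-trip
            except (AttributeError, OperationalError):
                # None (never opened) or dead: drop it; the next ORM
                # query transparently reconnects.
                connection.close()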

  • What for, Visual Studio?! ml, cl, and link executables would suffice

    - by AntonIO
    The MSDN library article 9s7c9wdw says: "You can start this tool [cl.exe] only from the Visual Studio command prompt. You cannot start it from a system command prompt or from Windows Explorer." The corresponding (v=VS.80) page, geared towards Visual Studio 2005, makes no such mention. Moreover, there is this Q&A. The thing is: why should anybody spend anything on VS? ml is provided free of charge - necessarily so, since it poses no value addition. The combined size of the other two is 895 KB, uncompressed. The GUI is a disservice; I myself have found half a dozen bugs. However, if the above is true, you'd need the IDE. MSFT fanboys, please step up. Background is that I have the 2008 Pro edition. The official Firefox builds use VS 2005, which I have on another system. To me, no redundancy is acceptable. That's when I started pondering boiling down VS and merely copying over the essential binaries. Then I extended the thought to synthetically updating V$.
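
    For what it's worth, the "only from the Visual Studio command prompt" restriction is just environment setup. A sketch (the install path matches a default VS 2008 layout and is an assumption) that makes cl, ml and link usable in any plain cmd.exe window:

        rem vcvarsall.bat only sets PATH, INCLUDE and LIB for this session.
        call "C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" x86
        cl /EHsc hello.cpp
        ml /c hello.asm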

  • Overriding content_type for Rails Paperclip plugin

    - by Fotios
    I think I have a bit of a chicken-and-egg problem. I would like to set the content_type of a file uploaded via Paperclip. The problem is that the default content_type is based only on the file extension, but I'd like to base it on something else. I seem to be able to set the content_type in a before_post_process callback:

        class Upload < ActiveRecord::Base
          has_attached_file :upload
          before_post_process :foo

          def foo
            logger.debug "Changing content_type"
            # This works
            self.upload.instance_write(:content_type, "foobar")
            # This fails because the file does not actually exist yet
            self.upload.instance_write(:content_type, file_type(self.upload.path))
          end

          # Returns the filetype based on the file command (assume it works)
          def file_type(path)
            return `file -ib '#{path}'`.split(/;/)[0]
          end
        end

    But I cannot base the content type on the file itself, because Paperclip doesn't write the file until after_create, and I cannot seem to set the content_type after it has been saved, whether in an after_create callback or back in the controller. So I would like to know if I can somehow get access to the actual file object (assume there are no processors doing anything to the original file) before it is saved, so that I can run the file_type command on it. Or is there a way to modify the content_type after the objects have been created?
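
    A sketch of the first option, assuming the Paperclip 2.x API of this era: before the attachment is flushed to disk, the pending upload is available as a tempfile via queued_for_write, which a before_post_process callback can inspect:

        class Upload < ActiveRecord::Base
          has_attached_file :upload
          before_post_process :set_real_content_type

          def set_real_content_type
            # The not-yet-saved file, as a tempfile on local disk.
            pending = upload.queued_for_write[:original]
            return true if pending.nil?
            upload.instance_write(:content_type, file_type(pending.path))
            true    # returning false would abort post-processing
          end

          def file_type(path)
            `file -ib '#{path}'`.split(/;/)[0]
          end
        end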

  • Windows Mobile camera capture issue with DirectShow

    - by user1112111
    I wrote a C# application that uses DirectShow to allow users to take photos with the camera attached to a Windows Mobile 6 phone. The problem I've spotted is that the preview functionality stops working after a period of not using it. How can I determine what's causing this? Is there a timeout in the Windows registry that says how long the camera should stay on? This is very strange, as my code runs perfectly when turning the camera on and off quickly enough (i.e. not leaving the camera off for more than ~15 seconds).

  • Cherokee rule failover

    - by phretor
    I have a Cherokee virtual host configured as follows:

        1st rule: "Directory /" -> HTTP Reverse Proxy
        2nd rule: "Directory /" -> uWSGI

    I know the second rule is useless because it's never triggered. However, the first rule occasionally returns a 504 Gateway Timeout error, so I was thinking of failing over to the second rule, but I don't know how to achieve this. Is it possible? Unfortunately, I cannot use a single rule with load-balanced information sources, because I use two different types of information source (i.e. pure HTTP and WSGI).

  • Is there a better tool than postcat for viewing postfix mail queue files?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered sitting on our secondary mail server. The link to their main server had been (and still is) down for two days and they needed to see their email. So I wrote up a quick Perl script to use mailq in combination with postcat to dump each email for their address into a separate file, tar'd it up and sent it off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could provide their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution:

        #!/usr/bin/perl
        $qCmd = "mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);
        $i = 0;
        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) {
                next;
            }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") {
                next;
            }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject = `cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.
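
    A hedged sketch of the parser-free route (MIME::Parser comes from the CPAN MIME-tools distribution; the output directory and header list are arbitrary choices): postcat emits the queued message in close-to-RFC-2822 form, which MIME-tools can split into decoded parts:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use MIME::Parser;

        my $queue_id = shift or die "usage: $0 <queue-id>\n";

        my $parser = MIME::Parser->new;
        $parser->output_dir('./extracted');   # decoded parts land here as files

        # -b/-h (body and headers only) assume Postfix >= 2.7; with plain -q,
        # the record banner lines postcat prints would need stripping first.
        open my $fh, '-|', 'postcat', '-b', '-h', '-q', $queue_id
            or die "cannot run postcat: $!";
        my $entity = $parser->parse($fh);
        close $fh;

        # Print just the interesting headers, not the whole Received: trail.
        for my $h (qw(From To Subject Date)) {
            my $v = $entity->head->get($h);
            print "$h: $v" if defined $v;     # get() keeps the newline
        }
        printf "%d MIME part(s) written to ./extracted\n",
               scalar $entity->parts || 1;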
