Search Results

Search found 25727 results on 1030 pages for 'solution'.

Page 677/1030

  • C++ char* returned by SWIG causes a problem in Python 3.0

    - by gpliu3
    Our C++ lib works fine with Python 2.4 using SWIG, returning a C++ char* back to a Python str. But this solution hits a problem in Python 3.0; the error is: Exception=(, UnicodeDecodeError('utf8', b"\xb6\x9d\xa.....", 0, 1, 'unexpected code byte') Our definition is like this (working fine in Python 2.4): void cGetPubModulus( void* pSslRsa, char* cMod, int* nLen ); %include "cstring.i" %cstring_output_withsize( char* cMod, int* nLen ); I suspect SWIG is doing a bytes-to-str conversion automatically. In Python 2.4 it can be implicit, but in Python 3.0 it's no longer allowed. Anyone got a good idea? Thanks
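
    A minimal Python-side illustration of the underlying issue (byte values are hypothetical; the SWIG typemap fix itself is not shown): in Python 3, raw char* data that is not valid UTF-8 cannot be turned into str implicitly, so it either has to stay as bytes or be decoded with an explicit policy.

      # Python 3: non-UTF-8 bytes can no longer become str implicitly.
      raw = b"\xb6\x9d\xa7"          # hypothetical modulus bytes from the C++ side

      try:
          text = raw.decode("utf-8")  # roughly what the wrapper attempts
      except UnicodeDecodeError as e:
          print("decode failed:", e)

      # Two explicit options on the Python side:
      as_bytes = bytes(raw)            # keep the data binary
      lossy = raw.decode("latin-1")    # or pick an 8-bit codec that never fails
      print(as_bytes.hex(), lossy)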

    Read the article

  • Zend Framework: how to remove rendered views in the controller?

    - by takpar
    I want to render one of these sets of views: head, body-$id1, foot OR head, body-$id2, foot, depending on which set exists. I do it like this: try { $this->render("head"); $this->render("body-$id1"); $this->render("foot"); } catch (Exception $e) { $this->render("head"); $this->render("body-$id2"); $this->render("foot"); } but it causes the head view to be rendered twice if body-$id1 does not exist. Do you have a better solution? In other words, can I check the existence of body-$id1 before rendering it?

    Read the article

  • Is everyone baking the same CI cake?

    - by Brett Rigby
    I can't help but wonder about this whole Continuous Integration process and wanted to know what you think about it all. From my perspective, we're constructing our own 'flavour' of NAnt/Ivy/CruiseControl.Net in-house, and I can't help but get the feeling that other dev shops are doing exactly the same work and running into the same problems and pitfalls. I'm not complaining about NAnt, Ivy or CruiseControl at all, as they've been brilliant in helping our team of developers become more sure of the quality of their code, but it just seems strange that these tools are very popular, yet we're all re-inventing the CI wheel. Is there a pre-made solution for building .NET applications using the tools mentioned above, and if so, why aren't we all using it?

    Read the article

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small I was able to store each result in a separate file to be parsed later. With large N, however, I find that I am wasting storage space (for the file creation) and simple commands like ls require extra care due to internal limits of bash: -bash: /bin/ls: Argument list too long. Each computation is required to run through a qsub scheduling algorithm, so I am unable to create a master program which simply aggregates the output data to a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from this embarrassingly parallel computation before it gets unmanageable?
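
    A minimal sketch of one way each job could append safely to a single shared results file, assuming a POSIX filesystem whose advisory locks are actually honoured across the cluster's nodes (they often are not on NFS); the path and environment variable are placeholders:

      import fcntl
      import os

      def append_result(lines, path="/scratch/results.txt"):
          """Append this job's few output lines under an exclusive lock."""
          record = "".join(line.rstrip("\n") + "\n" for line in lines)
          with open(path, "a") as f:
              fcntl.flock(f, fcntl.LOCK_EX)   # block until no other job holds the lock
              f.write(record)
              f.flush()
              os.fsync(f.fileno())            # make sure the record hits the disk
              fcntl.flock(f, fcntl.LOCK_UN)

      if __name__ == "__main__":
          job_id = os.environ.get("PBS_JOBID", "unknown")
          append_result([f"job={job_id}", "result=42"])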

    Read the article

  • Best practice -- tracking remote data content (cURL, file_get_contents, cron, et al.)?

    - by user322787
    I am attempting to build a script that will log data that changes every second. The initial thought was "Just run a PHP file that does a cURL every second from cron" -- but I have a very strong feeling that this isn't the right way to go about it. Here are my specifications: There are currently 10 sites I need to gather data from and log to a database -- this number will invariably increase over time, so the solution needs to be scalable. Each site spits its data out to a URL every second, but only keeps 10 lines on the page, and it can sometimes spit out up to 10 new lines each time, so I need to pick up that data every second to ensure I get all of it. As I will also be writing this data to my own DB, there's going to be I/O every second of every day for a considerably long time. Barring magic, what is the most efficient way to achieve this? It might help to know that the data I am getting every second is very small, under 500 bytes.
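
    A rough sketch of the polling loop in Python rather than PHP-under-cron, purely to illustrate one scalable shape; the URLs, table layout, SQLite backend, and the dedupe-on-(url, line) assumption are all placeholders, not part of the question:

      import sqlite3
      import time
      import urllib.request

      SITES = ["http://example.com/feed1", "http://example.com/feed2"]  # hypothetical

      def poll_once(db):
          for url in SITES:
              try:
                  with urllib.request.urlopen(url, timeout=1) as resp:
                      body = resp.read().decode("utf-8", errors="replace")
              except OSError:
                  continue  # skip a site that is slow or down this tick
              for line in body.splitlines():
                  db.execute(
                      "INSERT OR IGNORE INTO samples (url, line) VALUES (?, ?)",
                      (url, line),
                  )
          db.commit()

      def main():
          db = sqlite3.connect("samples.db")
          db.execute(
              "CREATE TABLE IF NOT EXISTS samples (url TEXT, line TEXT, UNIQUE(url, line))"
          )
          while True:
              started = time.monotonic()
              poll_once(db)
              time.sleep(max(0.0, 1.0 - (time.monotonic() - started)))  # hold ~1s cadence

      if __name__ == "__main__":
          main()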

    Read the article

  • Add several variables to a data frame, based on a vector

    - by Andreas
    I am sure this is easy, but I can't figure it out right now. Basically, I have a long vector of variable names: names <- c("first","second", "third") I have some data, and I now need to add the variables. I could do: data$first <- NA But since I have a long list, I would like an automated solution. This doesn't work: for (i in 1:length(names)) (paste("data$", names[i],sep="") <- NA) The reason I want this is that I need to vertically merge two data frames, where one doesn't have all the variables it should have. Thanks in advance
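
    The core trick is to index by the string itself instead of pasting together an expression like data$first (in R, the analogous move is data[[name]] <- NA inside the loop). Purely as an illustration of that pattern, here is the same idea rendered in Python/pandas with made-up data:

      import pandas as pd

      names = ["first", "second", "third"]
      data = pd.DataFrame({"id": [1, 2, 3]})

      # Index by the string itself instead of building an expression from it.
      for name in names:
          data[name] = None   # NA-like placeholder column

      print(data)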

    Read the article

  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled, when the object representing the resource is contained in a shared pointer? Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections. // open a connection to the local HTTP port boost::shared_ptr<Socket> socket = Socket::connect("localhost:80"); In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource. /** A TCP/IP connection. */ class Socket { public: static boost::shared_ptr<Socket> connect(const std::string& address); virtual ~Socket(); protected: Socket(const std::string& address); private: // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource. class Socket { public: virtual void close(); // may throw // ... }; Unfortunately, this approach introduces another problem: Our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code. socket->close(); // ... size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use! Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation. Or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly. There is a way out of this dilemma. But the solution involves using a modified shared pointer class. These modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function. The rationale for this requirement is clear: The shared pointer's destructor must not throw. If the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): If resource deallocation fails, no exception can be thrown. The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated). 
    Here is the original example, using a throwing deleter function: /** A TCP/IP connection. */ class Socket { public: static SharedPtr<Socket> connect(const std::string& address); protected: Socket(const std::string& address); virtual ~Socket() { } private: struct Deleter; // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; struct Socket::Deleter { void operator()(Socket* socket) { // Close the connection. If an error occurs, delete the socket // and throw an exception. delete socket; } }; SharedPtr<Socket> Socket::connect(const std::string& address) { return SharedPtr<Socket>(new Socket(address), Deleter()); } We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown. SharedPtr<Socket> socket = Socket::connect("localhost:80"); // ... socket.reset();
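
    The same tension shows up in garbage-collected languages too. A tiny Python analogue (the Socket class and its error are hypothetical), where the implicit finalizer has to swallow errors but an explicit close() is allowed to report them:

      class Socket:
          """Hypothetical resource wrapper: silent vs. reporting release."""

          def __init__(self, address):
              self.address = address
              self.closed = False

          def close(self):
              # Explicit release: allowed to raise if the OS reports a failure.
              if not self.closed:
                  self.closed = True
                  raise OSError(f"flush failed while closing {self.address}")  # simulated

          def __del__(self):
              # Implicit release: errors here cannot usefully propagate, so swallow them.
              try:
                  self.close()
              except OSError:
                  pass

      s = Socket("localhost:80")
      try:
          s.close()          # the caller gets to see the deallocation failure
      except OSError as e:
          print("close failed:", e)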

    Read the article

  • Linux C++ error: undefined reference to 'dlopen'

    - by lerax
    Hi all! I work on Linux with C++ (Eclipse) and want to use a library. Eclipse shows me an error: undefined reference to 'dlopen'. Do you know a solution? Here is my code. #include <stdlib.h> #include <stdio.h> #include <dlfcn.h> int main(int argc, char **argv) { void *handle; double (*desk)(char*); char *error; handle = dlopen ("/lib/CEDD_LIB.so.6", RTLD_LAZY); if (!handle) { fputs (dlerror(), stderr); exit(1); } desk= dlsym(handle, "Apply"); if ((error = dlerror()) != NULL) { fputs(error, stderr); exit(1); } dlclose(handle); }

    Read the article

  • Problem with referencing CSS and JavaScript files relatively

    - by Markus
    I have an IIS web site. This web site contains other web sites, so the structure is like this: \ MainWebSite\ App1\ App2\ All sites are ASP.NET MVC web applications. In the MasterPage of App1 I reference the script files like this: <script type="text/javascript" src="../../Scripts/jquery-ui-1.8.custom.min.js"></script> The problem is that it now tries to find the file at http:\server\MainWebSite\Scripts.... How can I work around that? Should I put all my scripts and CSS files into the root directory? Is that a preferred solution?

    Read the article

  • How to flush output after each `echo` call?

    - by CuSS
    Hi all! I have a PHP script that only produces logs for the client. When I echo something, I want it to be transferred to the client on the fly. (Because while the script is processing, the page is blank.) I have already played around with ob_start() and ob_flush(), but they didn't work. What's the best solution? PS: it is a little dirty to put a flush after each echo call... EDIT: Neither of the answers worked; is it a PHP or Apache fault? Thanks in advance, José Moreira. Sorry for my bad English. ;)

    Read the article

  • Database structure for storing Bank-like accounts and transactions

    - by user1241320
    We're in the process of adding a bank-like sub-system to our own shop. We already have customers, so each will be given a sort of account, and transactions of some kind will be possible (adding to the account or subtracting from it). So we at least need the account entity and the transaction entity, and operations will then have to recalculate overall balances. How would you structure your database to handle this? Is there any standard that bank systems have to use that I could model? By the way, we're on MySQL but will also look at a NoSQL solution for a performance boost.
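
    One common shape is an append-only ledger: accounts plus immutable transactions, with balances derived from (or cached alongside) the transaction history. A minimal sketch of that idea in Python with in-memory stand-ins for the two tables; the field names and sample amounts are made up:

      from dataclasses import dataclass
      from decimal import Decimal

      @dataclass(frozen=True)
      class Transaction:
          account_id: int
          amount: Decimal        # positive = credit, negative = debit

      ledger = [
          Transaction(1, Decimal("100.00")),
          Transaction(1, Decimal("-25.50")),
          Transaction(2, Decimal("10.00")),
      ]

      def balance(account_id: int) -> Decimal:
          # The balance is always derived from the immutable transaction history.
          return sum((t.amount for t in ledger if t.account_id == account_id), Decimal("0"))

      print(balance(1))  # 74.50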

    Read the article

  • Passing HttpFileCollectionBase to the Business Layer - Bad?

    - by Terry_Brown
    Hopefully there's an easy solution to this one. I have my MVC2 project, which allows uploads of files on certain forms. I'm trying to keep my controllers lean and handle this sort of processing within the business layer. That said, HttpFileCollectionBase is obviously in the System.Web assembly. Ideally I want to call something like: UserService.SaveEvidenceFiles(MyUser user, HttpFileCollectionBase files); or something similar and have my business layer handle the logic of how and where these things are saved. But it feels a little icky to have my models layer with a reference to System.Web, in terms of separation of concerns etc. So, we have (that I'm aware of) a few options: the web project handling this, and my controllers getting fatter; mapping the HttpFileCollectionBase to something my business layer likes; or passing the collection through, and accepting that I reference System.Web from my business project. Would love some feedback here on best practice approaches to this sort of thing, even if not specifically within the context of the above.

    Read the article

  • Using eval() in JavaScript to unpack an array

    - by gnomixa
    I have an array that I need to unpack. So, from something like var params = new Array(); params.push("var1"); params.push("var2"); I need to have something like "var1", "var2". I tried using eval, but eval() gives me something like var1, var2... I don't want to insert the quotes myself, as the values passed can be integers or other types. I need to pass this to a function, so that's why I can't just traverse the array and shove it into a string. What is the preferred solution here?
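
    The concept being reached for here is argument unpacking (in JavaScript, Function.prototype.apply does this without eval). For comparison, a minimal Python illustration of the same idea with a made-up target function:

      def target(a, b, c):
          print(a, b, c)

      params = ["var1", 2, 3.5]   # mixed types, no manual quoting required

      # Unpack the list into separate positional arguments.
      target(*params)

      # The eval-style equivalent would have to build and parse source text:
      # eval('target("var1", 2, 3.5)')  -- fragile and unnecessary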

    Read the article

  • Doing a generic <sql:query> in Grails

    - by melling
    This is a generic way to select data from a table and show the results in an HTML table using JSP taglibs. What is the generic way to do this in Grails? That is, take a few lines of SQL and generate an HTML table from scratch in Grails, including the column names as headers. <sql:query var="results" dataSource="${dsource}"> select * from foo </sql:query> (# of rows: ${results.rowCount}) <table border="1"> <!-- column headers --> <tr bgcolor=cyan> <c:forEach var="columnName" items="${results.columnNames}"> <th><c:out value="${columnName}"/></th> </c:forEach> </tr> <!-- column data --> <c:forEach var="row" items="${results.rowsByIndex}"> <tr> <c:forEach var="column" items="${row}"> <td><c:out value="${column}"/></td> </c:forEach> </tr> </c:forEach> </table> The solution to this was answered in another StackOverFlow question. http://stackoverflow.com/questions/425294/sql-database-views-in-grails IF SOMEONE WRITES A GOOD ANSWER, I'LL ACCEPT IT. I would like a 100% acceptance on all of my questions.

    Read the article

  • Buy or build a tool for data reporting?

    - by Manoj
    We have been asked to provide a data reporting solution. The following are the requirements: i. The client has a lot of data which is generated every day as an outcome of the tests they run. These tests are run at several sites, and the results get automatically backed up to a central server. ii. They already have Perl scripts which post-process them and generate Excel-based reports. iii. They need a web-based interface for comparing those reports, and they need to mark and track issues which might be present in those data. I am unsure whether we should build our own tool for this or go for an already existing tool (any suggestions?). Can you please provide supporting arguments for the decision that you would suggest?

    Read the article

  • EA: import requirements from a CSV file

    - by bolekprez
    While importing requirements from a CSV file I get the message: Bad object type when creating new record of type '' The file I was trying to import: GUID$Name$Notes$Scope {BF467CF6-FF97-4dd4-894C-3F09E713678C}$NameOfReq$description$Public {71B26F9A-5418-499e-B635-F2DB158D3FF1}$Requirement1$$Public {0}$Requir1$blah$Public The first two lines (plus the header) come from existing requirements and there is no problem importing them. The last line should create a new requirement object in Enterprise Architect, but it gives the message mentioned above. Any solution? What should a proper file for creating (importing from CSV) new requirements look like?

    Read the article

  • [WordPress MU] Changing the uploads directory

    - by Pedro Reis
    Hi, I've looked everywhere, and while there are solutions to change the uploads directory for all the blogs by changing this line in wp-settings.php: define( "BLOGUPLOADDIR", WP_CONTENT_DIR . "/blogs.dir/{$wpdb->blogid}/files/" ); I can't find a way of changing the directory for each blog individually, something like: define( "BLOGUPLOADDIR", WP_CONTENT_DIR . "/blogs.dir/{$blog_name}/files/" ); But I have no idea how I could get the name of the blog from within wp-settings.php, as you can't use get_bloginfo('name'); outside of the template. Anybody with a solution for this?

    Read the article

  • How can you set a time limit for a PowerShell script to run?

    - by calrain
    I want to set a time limit on a PowerShell (v2) script so it forcibly exits after that time limit has expired. I see that PHP has commands like set_time_limit and max_execution_time where you can limit how long the script, and even a function, can execute for. With my script, a do/while loop that checks the time isn't appropriate, as I am calling an external code library that can just hang for a long time. I want to limit a block of code and only allow it to run for x seconds, after which I will terminate that code block and return a response to the user that the script timed out. I have looked at background jobs, but they operate in a different thread, so they won't have kill rights over the parent thread. Has anyone dealt with this, or does anyone have a solution? Thanks!
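
    In PowerShell the usual shape is Start-Job / Wait-Job -Timeout / Stop-Job, which works because jobs run in child processes that can be killed. The same general pattern sketched in Python, purely for illustration (the worker script name is hypothetical): run the risky block in a child process and kill it when the budget expires.

      import subprocess

      TIME_LIMIT = 30  # seconds

      try:
          # timeout= kills the child process if it has not finished in time
          result = subprocess.run(
              ["python", "call_external_library.py"],  # hypothetical worker script
              capture_output=True,
              text=True,
              timeout=TIME_LIMIT,
          )
          print(result.stdout)
      except subprocess.TimeoutExpired:
          print(f"The operation timed out after {TIME_LIMIT} seconds.")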

    Read the article

  • Screen capture during testing

    - by Edwward
    This is an application for reviewing performance tests. Simple in concept, tricky to describe. Picture: 1) Recording interactions with a WPF program so the inputs can be played back. 2) Playing the inputs back while doing a continuous screen capture. 3) Capturing wall time as well as continuous CPU percentages during playback. 4) Repeating steps (2) and (3) lots of times. 5) Writing the relevant stuff out to files/db. 6) Reading it and putting it all in a fancy UI for easy review/analysis. The killer for me is (2). I could use some guidance on a good, possibly commercial, screen capture SDK. I would also welcome the news that my whole problem already has a solution. And of course any thoughts on the overall idea would also be great. Thanks. Ed

    Read the article

  • JAXWS serves only 100 requests; how to configure JAXWS to make it unlimited?

    - by cbz
    Hello, I'm using JAXWS to generate web services, serving them with Endpoint.publish() as well as by deploying a WAR file, but as soon as it has served 100 requests it won't return the 101st response. How do I configure JAXWS to make this count unlimited? EDIT: solution found. First of all, it was not related to JAXWS, and I'm sorry for posting it here; my first impression was that the problem was with JAXWS, but after deeper exploring and debugging I found the problem was in my persistence layer (Hibernate), where the maximum number of sessions allowed is 100 by default. Sorry again for making you guys think about something which actually does not make sense.

    Read the article

  • One repository/multiple projects without getting mixed up?

    - by OverTheRainbow
    Hello. After reading Joel's last article on Mercurial, I'm giving it a shot on XP as a single-user, single-computer source control system. One thing I'd like to check, though: it'd be easier to just create one repository for all the tiny projects I keep in e.g. C:\VB.Net\, but the result is that the changes I make to the different projects therein (C:\VB.Net\ProjectA\, C:\VB.Net\ProjectB\, etc.) will be mixed in a single changelog. But if I use a single repository for all projects, when I do diffs or go through the change history, will I be able to filter the data so that I only see changes pertaining to a given project? Otherwise, is creating a repository in each project directory the only solution? Thank you.

    Read the article

  • Help with a click-to-copy-to-clipboard function and opening a new frame/page

    - by jagarda
    Hi, I'm currently building a new discount webpage and I'm looking for a way to hover over a discount button to reveal a small text without clicking. And after clicking on the button, the text in the Flash button should be copied to the clipboard and a new page should open, with a small frame from my site at the top and the new page underneath. Examples of this kind of script can be viewed on this page: http://www.retailmenot.com/ on the yellow boxes with the discount text. I'm not looking for a free script or for someone to do this for me. All I want to know is whether there is another language to do this with, like possibly Java, and maybe some links to tutorials. (The current page solves this with Flash, which I haven't worked with before.)

    Read the article

  • Disable a link with the Prototype observe method...

    - by enyo
    I want to create a link like this: <a href="http://example.com">text</a> and replace the behavior so that, when clicked, the link downloads the content with Ajax instead. It is important for me not to replace the href attribute (so copying the link still works). One solution would be to do: $('link').onclick = function() { return false; }; but I would like to use the .observe method. But this doesn't work: $('link').observe('click', function() { return false; }); (which is quite logical). Any ideas on how I could achieve this? Thanks.

    Read the article

  • SQL Query: Using Cursors

    - by user2953138
    I need some directions for SQL Server and cursors. I have a table named Order:

        OrderID  Item  Amount
        1        A     10
        1        B     1
        2        A     5
        2        C     4
        2        D     21
        3        B     11

    I have a second table named Storage:

        Item  Amount
        A     40
        B     44
        C     20
        D     1

    For every OrderID, I want to check if enough items are available. If not, I want to return an error message. How can this be done with cursors at all? Are nested cursors the solution to this? My main issue is understanding how I can fetch the OrderID as an actual "group" of ID = 1, 2, 3, etc. instead of line by line.
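
    A set-based check is usually cleaner than a cursor here. The comparison logic, sketched in Python with in-memory copies of the two tables so the grouping step is explicit; it assumes each order is checked against the full stock independently:

      from collections import defaultdict

      orders = [  # (OrderID, Item, Amount) copied from the question
          (1, "A", 10), (1, "B", 1), (2, "A", 5),
          (2, "C", 4), (2, "D", 21), (3, "B", 11),
      ]
      storage = {"A": 40, "B": 44, "C": 20, "D": 1}

      # Group the ordered amounts per (OrderID, Item).
      needed = defaultdict(int)
      for order_id, item, amount in orders:
          needed[(order_id, item)] += amount

      # Flag every order line that asks for more than is on hand.
      for (order_id, item), amount in sorted(needed.items()):
          available = storage.get(item, 0)
          if amount > available:
              print(f"Order {order_id}: not enough {item} ({amount} needed, {available} in stock)")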

    Read the article

  • Multiple ASP.Net Controls On One Page

    - by Duracell
    We have created a User Control for ASP.NET. At any one time, a user's profile page could contain between one and infinity of these controls. The Control.ascx file contains quite a bit of JavaScript. When the control is rendered by .NET to HTML, you notice that it prints the JavaScript for each control. This was expected. I'd like to reduce the amount of HTML output by the server to improve page load times. Normally, you could just move the JavaScript to an external file, and then you only need one extra HTTP request which will serve for all controls. But what about instances in the JavaScript where we have something like document.getElementById('<%= txtTextBox.ClientID %>'); How would the JavaScript know which user control to work with? Has anyone done something like this, or is the solution staring me in the face?

    Read the article
