Search Results

Search found 18475 results on 739 pages for 'log diff'.

Page 260/739 | < Previous Page | 256 257 258 259 260 261 262 263 264 265 266 267  | Next Page >

  • Executing scripts in threads

    - by Pedro Magalhaes
    Hi. I want to make an app that executes remote scripts. I am going to design it as a Windows service that listens on a TCP/IP port, and for every new request I will execute a Python script. Since I need to handle any number of TCP/IP requests at the same time, I will need to execute the Python scripts in separate threads. How can I do that? Is it that simple? The scripts will share some objects, like log files (text files) and other modules. In the log file example, every script will write to the same file, so the app (the Windows service) will be responsible for that. I need to make these objects thread safe, right?
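
    A minimal sketch of the threading side, assuming each request simply runs a script with subprocess and all log writes go through one lock-guarded helper (the file names and function names are illustrative, not from the question):

        import subprocess
        import threading

        log_lock = threading.Lock()   # one lock shared by every worker thread

        def write_log(message, log_path="service.log"):
            # Serialise writes so concurrent scripts cannot interleave lines.
            with log_lock:
                with open(log_path, "a") as log_file:
                    log_file.write(message + "\n")

        def handle_request(script_path):
            # Run the requested Python script as a child process.
            result = subprocess.run(["python", script_path],
                                    capture_output=True, text=True)
            write_log("%s exited with %d" % (script_path, result.returncode))

        def on_new_connection(script_path):
            # One worker thread per incoming TCP request.
            threading.Thread(target=handle_request,
                             args=(script_path,), daemon=True).start()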

    Read the article

  • AppDomain.UnhandledException event not fired

    - by Yaakov Davis
    In a WPF application, the app simply crashes without the above event being fired. (I'm also registered for DispatcherUnhandledException, which doesn't fire either.) I infer that it doesn't fire because the handler is defined to write a log entry, and when looking at the log, there's no corresponding entry. It happens in a production environment; I'm unable to point to a particular scenario. I've read a few descriptions of scenarios where this might happen, but I still don't have a clear grasp on it. Can anyone share their experience or knowledge on this? How can I find the root cause of the crash and solve it? Many thanks.

    Read the article

  • Sockets, Threads and Services in Android: how to make them work together?

    - by Spredzy
    Hi all, I am facing a problem with threads and sockets that I can't figure out; if someone can help me, I would really appreciate it. Here are the facts: I have a service class NetworkService, and inside this class I have a Socket attribute. I would like it to stay connected for the whole lifecycle of the service. I connect the socket in a thread, so if the server times out it will not block my UI thread. The problem is that inside the thread where I connect my socket everything is fine - it is connected and I can talk to my server - but once this thread is over and I try to reuse the socket in another thread, I get the error message "Socket is not connected". My questions are:
    - Is the socket automatically disconnected at the end of the thread?
    - Is there any way to pass a value back from a called thread to the caller?
    Thanks a lot. Here is my code:

        public class NetworkService extends Service {

            private Socket mSocket = new Socket();

            private void _connectSocket(String addr, int port) {
                Runnable connect = new connectSocket(this.mSocket, addr, port);
                new Thread(connect).start();
            }

            private void _authentification() {
                Runnable auth = new authentification();
                new Thread(auth).start();
            }

            private INetwork.Stub mBinder = new INetwork.Stub() {
                @Override
                public int doConnect(String addr, int port) throws RemoteException {
                    _connectSocket(addr, port);
                    _authentification();
                    return 0;
                }
            };

            class connectSocket implements Runnable {
                String addrSocket;
                int portSocket;
                int TIMEOUT = 5000;

                public connectSocket(String addr, int port) {
                    addrSocket = addr;
                    portSocket = port;
                }

                @Override
                public void run() {
                    SocketAddress socketAddress = new InetSocketAddress(addrSocket, portSocket);
                    try {
                        mSocket.connect(socketAddress, TIMEOUT);
                        PrintWriter out = new PrintWriter(mSocket.getOutputStream(), true);
                        out.println("test42");
                        Log.i("connectSocket()", "Connection Succesful");
                    } catch (IOException e) {
                        Log.e("connectSocket()", e.getMessage());
                        e.printStackTrace();
                    }
                }
            }

            class authentification implements Runnable {

                private String constructFirstConnectQuery() {
                    String query = "toto";
                    return query;
                }

                @Override
                public void run() {
                    BufferedReader in;
                    PrintWriter out;
                    String line = "";
                    try {
                        in = new BufferedReader(new InputStreamReader(mSocket.getInputStream()));
                        out = new PrintWriter(mSocket.getOutputStream(), true);
                        out.println(constructFirstConnectQuery());
                        while (mSocket.isConnected()) {
                            line = in.readLine();
                            Log.e("LINE", "[Current]- " + line);
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }

    Read the article

  • Get all domain names on the network

    - by user175084
    I need to get the list of domain names on my network, but I am only getting the domain I log into. So, for example, there are two domains, "xyz" and "xyz2", but I only get the domain I log into. Here is my code:

        if (!IsPostBack)
        {
            StringCollection adDomains = this.GetDomainList();
            foreach (string strDomain in adDomains)
            {
                DropDownList1.Items.Add(strDomain);
            }
        }

        private StringCollection GetDomainList()
        {
            StringCollection domainList = new StringCollection();
            try
            {
                DirectoryEntry en = new DirectoryEntry("LDAP://");
                // Search for objectCategory type "Domain"
                DirectorySearcher srch = new DirectorySearcher("objectCategory=Domain");
                SearchResultCollection coll = srch.FindAll();
                // Enumerate over each returned domain.
                foreach (SearchResult rs in coll)
                {
                    ResultPropertyCollection resultPropColl = rs.Properties;
                    foreach (object domainName in resultPropColl["name"])
                    {
                        domainList.Add(domainName.ToString());
                    }
                }
            }
            catch (Exception ex)
            {
                Trace.Write(ex.Message);
            }
            return domainList;
        }

    Read the article

  • SQL Query to delete oldest rows over a certain row count?

    - by Casey
    I have a table that contains log entries for a program I'm writing. I'm looking for ideas for an SQL query (I'm using SQL Server Express 2005) that will keep the newest X records and delete the rest. I have a datetime column that is a timestamp for the log entry. I figure something like the following would work, but I'm not sure of the performance of the IN clause for larger numbers of records. Performance isn't critical, but I might as well do the best I can the first time.

        DELETE FROM MyTable
        WHERE PrimaryKey NOT IN
            (SELECT TOP 10000 PrimaryKey FROM MyTable ORDER BY TimeStamp DESC)

    Read the article

  • Stack trace in a website project when debug = false

    - by chandmk
    We have a website project. We are logging unhandled exceptions via an AppDomain-level exception handler. When we set debug = true in web.config, the exception log shows the offending line numbers in the stack trace, but when we set debug = false, the log does not display the line numbers. We are not in a position to convert the website project into a web application project at this time; it's a legacy application and almost all the code is in the aspx pages. We also need to leave the project in 'updatable' mode, i.e. we can't use the pre-compile option. We are generating PDB files. Is there any way to tell this kind of website project to generate the PDB files and show the line numbers in the stack trace?

    Read the article

  • Instant Messenger: How does gtalk/yahoo messenger populate the contact list?

    - by Owen
    Hi all, we are currently working on a small IM project which works pretty much like gtalk and yahoo messenger. We came across a problem that puzzled us: how do gtalk/YM populate their contact lists? Given that a user has, say, roughly 500 contacts, both IMs seem to load the contact list very quickly and already sorted. Here are my questions (referring to either):
    1. Does it cache its contacts, e.g. saving them to a file on exit, so that upon log-in it can readily load the contacts and display them in the contact list?
    2. Does it always request the vCards upon log-in? Or do they have a vCard push or something similar that simply updates the contacts' profiles (like their status [presence push - available, busy, etc...])?
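
    For the caching option, a rough sketch of how a client could persist the roster between sessions (purely illustrative - the file name and fields are assumptions, not anything gtalk or Yahoo actually documents):

        import json
        import os

        CACHE_FILE = "roster_cache.json"   # hypothetical local cache

        def load_cached_roster():
            # Show the last known, already-sorted contact list immediately on log-in.
            if os.path.exists(CACHE_FILE):
                with open(CACHE_FILE) as fh:
                    return json.load(fh)    # {contact_id: {"name": ..., "ver": ...}}
            return {}

        def save_roster(roster):
            # Persist the roster on exit so the next start-up is instant.
            with open(CACHE_FILE, "w") as fh:
                json.dump(roster, fh)

        def contacts_to_refresh(cached, server_versions):
            # Only re-request vCards whose server-side version changed.
            return [cid for cid, ver in server_versions.items()
                    if cached.get(cid, {}).get("ver") != ver]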

    Read the article

  • Implementing Role based Helpers

    - by Cynics
    So my question is: how would you implement hand-written helpers based on the role of the current user? Would it be efficient to change the behaviour at request time, e.g. the helper somehow figures out the role of the user and includes the proper sub-module?

        module ApplicationHelper
          module LoggedInHelper
            # Some functions
          end

          module GuestHelper
            # The same functions
          end

          # If the user is a guest, include GuestHelper
          # If the user is logged in, include LoggedInHelper
        end

    Is it efficient this way? Is it the Rails way? I've got a whole bunch of functions that act like this, and I don't want to wrap every single one of them in an if statement:

        def menu_actions
          if current_user.nil? # User is a guest
            { "Log in" => link_to "Login", "/login" }
          else # User is logged in
            { "Log out" => link_to "Logout", "/logout" }
          end
        end

    Thank you for your time and thoughts.

    Read the article

  • MySQL Insert Statement Queue

    - by Justin
    We are building an Ajax application in which a user's input is submitted for processing to a PHP script. We currently write every request to a log file for tracking. I would like to move this tracking into a database table, but I do not want to run an insert statement after every request. What I would like to do is set up a 'queue' of transactions (inserts and updates) that need to be processed on the MySQL database, and then set up a cron job or process to check and process the transactions in the queue. Is there something out there that we could build upon, or do we have to just write to plain ol' text log files and process them?
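
    One rough sketch of the cron side, assuming the queue is kept as newline-delimited JSON appended by the PHP script and flushed to MySQL in batches (the file path, table, columns and DB driver are all assumptions):

        import json

        import MySQLdb  # any Python DB-API 2.0 driver works the same way

        QUEUE_FILE = "/var/spool/app/pending_requests.jsonl"   # hypothetical queue file

        def process_queue():
            with open(QUEUE_FILE) as fh:
                rows = [json.loads(line) for line in fh if line.strip()]
            if not rows:
                return
            conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="tracking")
            cur = conn.cursor()
            # One multi-row insert per run instead of one INSERT per request.
            cur.executemany(
                "INSERT INTO request_log (user_id, action, created_at) VALUES (%s, %s, %s)",
                [(r["user_id"], r["action"], r["created_at"]) for r in rows])
            conn.commit()
            conn.close()
            # Truncating/rotating the processed queue file is omitted here.

        if __name__ == "__main__":
            process_queue()   # run from cron, e.g. */5 * * * *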

    Read the article

  • Bug in Mathematica's Integrate with PrincipalValue->True

    - by Janus
    It seems that Mathematica's handling of principal value integrals fails in some corner cases. Consider these two expressions, which should give the same result:

        Integrate[UnitBox[x]/(x0 - x), {x, -Infinity, Infinity},
          PrincipalValue -> True, Assumptions -> {x0 > 0}] /. x0 -> 1 // Simplify

        Integrate[UnitBox[x]/(x0 - x) /. x0 -> 1, {x, -Infinity, Infinity},
          PrincipalValue -> True]

    In Mathematica 7.0.0 I get I Pi + Log[3] and Log[3], respectively. Has this been fixed in later versions? Does anybody have an idea for a (more or less) general workaround?
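
    For reference, a worked version of the integral at x0 = 1: since UnitBox[x] is 1 only on [-1/2, 1/2], the pole at x = 1 lies outside the support and no principal value is actually needed,

        \int_{-\infty}^{\infty} \frac{\mathrm{UnitBox}(x)}{1 - x}\,dx
          = \int_{-1/2}^{1/2} \frac{dx}{1 - x}
          = \Bigl[-\ln(1 - x)\Bigr]_{-1/2}^{1/2}
          = \ln\tfrac{3}{2} - \ln\tfrac{1}{2}
          = \ln 3,

    so Log[3] is the correct value and the extra I Pi in the first result is spurious.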

    Read the article

  • DataNucleus Enhancer flakey?

    - by KevMo
    I'm creating a GWT app in Google App Engine and using the Google datastore. Does anybody else find the DataNucleus enhancer flakey as all get out? I can save a class, and DataNucleus will do its thing just fine. If I change ANYTHING in the class (even adding whitespace) and then save, I get the following error:

        DataNucleus Enhancer completed with success for 0 classes. Timings : input=37 ms, enhance=0 ms, total=37 ms. Consult the log for full details
        DataNucleus Enhancer completed and no classes were enhanced. Consult the log for full details

    Once I clean my project, DataNucleus is happy again. Is this common when using Eclipse? Is there a workaround?

    Read the article

  • What are the common issues that can cause slow boot times of Windows CE6 Images?

    - by Psychic
    I am relatively new to Platform Builder, and while I am able to produce nk.bin files, they boot very slowly - 80 to 100 seconds - so I suspect there may be some checkbox somewhere that I need to set (or clear). I've already removed KITL, profiling, etc. in the project settings, and set the project to 'release build' and 'ship'. When I looked at the startup event log (in debug), there doesn't appear to be any specific point where it is slow; the log pretty much scrolls all the way through with no major pauses. One thing I found strange is that although the nk.bin file is a lot smaller in the release build (just under 12 MB), the boot time didn't noticeably change from the debug build. The board is a Vortex86DX_60A and I'm building CE6. Are there any common builder mistakes that I may be missing here, or is this going to be something a little deeper?

    Read the article

  • Renaming functions at runtime in PHP

    - by The Rook
    In PHP 5.3, is there a way to rename a function or "hook" a function? There is rename_function() in APD, which has been broken since about 2004. If you try to build it against PHP 5.3 you'll get this error:

        'struct _zend_compiler_globals' has no member named 'extended_info'

    This is a really easy error to fix - just change this line:

        CG(extended_info) = 1;

    to

        CG(compiler_options) |= ZEND_COMPILE_EXTENDED_INFO;

    I modified my php.ini and APD shows up in my phpinfo() as it should. However, when I call rename_function() the PHP page doesn't load and I get a segmentation fault in my /var/log/apache2/error.log. Is there any way to fix APD to work with a modern version of PHP? Or is there another method to rename functions? Why on earth is this vital feature not in PHP?! (Gotta love Python :)

    Read the article

  • Authlogic auto login fails on registration with STI User model

    - by Wei Gan
    Authlogic by default is supposed to log the user in automatically when the user's persistence token changes, but this seems to fail in my Rails app. I set up the following single table inheritance user model hierarchy:

        class BaseUser < ActiveRecord::Base
        end

        class User < BaseUser
          acts_as_authentic
        end

        create_table "base_users", :force => true do |t|
          t.string   "email"
          t.string   "crypted_password"
          t.string   "persistence_token"
          t.string   "first_name"
          t.string   "last_name"
          t.datetime "created_at"
          t.datetime "updated_at"
          t.string   "type"
        end

    To get auto login to work, I need to explicitly log users in in my UsersController:

        def create
          @user = User.new(params[:user])
          if @user.save
            UserSession.create(@user) # EXPLICITLY LOG USER IN BY CREATING SESSION
            flash[:notice] = "Welcome to Askapade!"
            redirect_to_target_or_default root_url
          else
            render :action => :new
          end
        end

    I was wondering whether it has anything to do with STI, or with the table being named "base_users" and not "users". I set it up before without STI and it worked, so I'm wondering why it fails once I put this hierarchy in place. Thanks!

    Read the article

  • How do I return a bit from a stored procedure with NHibernate?

    - by tigermain
    I am using NHibernate in my project, but I have a stored procedure which just returns a boolean indicating success or not. How do I code this in C#? I have tried the following, but it fails because I don't have a mapping for bool - I get:

        {"No persister for: System.Boolean, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"}

    from this code:

        IQuery query = NHibernateSession.CreateSQLQuery(
                "EXEC MyDatabase.dbo.[ContentProvider_Import] :ContentProviderImportLogId",
                "success", typeof(bool))
            .SetInt32("ContentProviderImportLogId", log.Id);
        var test = query.UniqueResult<bool>();

    and the same result from:

        IQuery query = NHibernateSession.CreateSQLQuery(
                "EXEC MyDatabase.dbo.[ContentProvider_Import] :ContentProviderImportLogId")
            .AddEntity(typeof(bool))
            .SetInt32("ContentProviderImportLogId", log.Id);
        var test = query.UniqueResult<bool>();

    Read the article

  • What CPAN module can summarize arbitrary error logs?

    - by mithaldu
    I'm maintaining some website code that will soon dump all its errors and warnings into a log file. In order to make this a bit more proactive, I plan to parse this log file daily, summarize the warnings and errors (i.e. count the occurrences of each specific one and group them by warning/error) and then email the summary to the devs on the project. This would admittedly be rather trivial with a hash and some further fiddling, but I wondered if there is a suitable module on CPAN that I could use for this task. It would either be one that summarizes Perl error/warning logs specifically, or one that summarizes arbitrary text files. Any suggestions?
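
    Just to illustrate the shape of the "hash and some fiddling" approach the question mentions (sketched in Python for brevity; the patterns and file names are assumptions, and an actual answer to this question would be a Perl/CPAN module):

        import re
        from collections import Counter

        def summarize(log_path):
            # Count occurrences of each distinct message, split into warnings and errors.
            warnings, errors = Counter(), Counter()
            with open(log_path) as fh:
                for line in fh:
                    # Strip the "at FILE line N" suffix so identical messages group together.
                    message = re.sub(r"\s+at .+ line \d+\.?$", "", line.rstrip())
                    if not message:
                        continue
                    bucket = warnings if message.lower().startswith("warning") else errors
                    bucket[message] += 1
            return warnings, errors

        if __name__ == "__main__":
            warn, err = summarize("site_error.log")
            for msg, count in err.most_common(20):
                print("%6d  %s" % (count, msg))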

    Read the article

  • Using the standard Java logging, is it possible to restart logs after a certain period?

    - by Fry
    I have some Java code that will be running as an importer for data for a much larger project. The initial logging code was done with the java.util.logging classes, so I'd like to keep it if possible, but it seems to be a little inadequate now given the amount of data passing through the importer. Often the importer will get data that the main system doesn't have information for, or that doesn't match the system's data, so it is ignored, but a message is written to the log about what information was dropped and why it wasn't imported. The problem is that this log tends to grow in size very quickly, so we'd like to be able to start a fresh log daily or weekly. Does anybody have an idea if this can be done with the logging classes, or would I have to switch to log4j or something custom? Thanks for any help!

    Read the article

  • Problem building PyGTK on CentOS

    - by Marcelo Cantos
    I am trying to build PyGTK on CentOS for a non-standard Python (2.6, vs. the out-of-the-box 2.4). It requires that I first build pygobject. pygobject-2.18.0 fails at the configure step. The error message is as follows:

        checking for GLIB - version >= 2.14.0... no
        *** Could not run GLIB test program, checking why...
        *** The test program failed to compile or link. See the file config.log for the
        *** exact error that occured. This usually means GLIB is incorrectly installed.
        configure: error: maybe you want the pygobject-2-4 branch?

    I have downloaded, built and successfully installed GLib. The config.log file contains the following output:

        configure:6893: gcc -E conftest.c
        conftest.c:13:28: error: ac_nonexistent.h: No such file or directory

    What am I doing wrong?

    Read the article

  • Need help getting Suspend to work in Ubuntu on laptop

    - by Aerik
    I've been doing a lot of research, but I've got to admit right up front that I'm not even sure exactly what the right question is. I've installed Kubuntu 10.4 on a Panasonic Toughbook CF-29. When I try to use "suspend", the screen flickers, and then the hard drive light goes off but the power light stays on. I looked at /var/log/pm-suspend.log, but I don't see any errors... though I'm not sure what success should look like either. So I guess my real question is a bit more accurately stated as "How do I troubleshoot suspend not working in Kubuntu on a laptop?" Thanks, Aerik

    Read the article

  • Expanding existing DVCS Wiki

    - by A Lion
    A portion of my job is to maintain technical documentation for a rapidly expanding manufacturing company. Because it is only a portion of my job and the company's product line is expanding so quickly, I can't stay on top of the documentation. As a result, I've been yearning for an information management system with a handful of specific features. I've found many products that have a subset, but none that have all the features I'm looking for. I'm at the point of picking an existing product and expanding it to cover my desired feature set; however, this will be a pet project and I will be learning the underlying language as I go. So, the main question is: which existing product will be the easiest to expand to cover the full feature set, and has a relatively easy to learn language? Alternatively, have I missed another existing program that will cover the feature set or should be in my list of "close, but not quite there"?

    Feature set:
    - web interface
    - based on a distributed version control system (e.g., git)
    - easy to edit by logged in novices (e.g. wiki, multimarkdown)
    - outputs in more traditional formats (e.g., doc, odt, pdf)
    - edits held in queue until editor/engineer/manager approves them (e.g., MS Word editing) [this is the really big elephant in the list - suggestions on where to start appreciated]
    - edits held in queue specifically for engineer approval [extra limb of the elephant in the list]
    - well-supported in the open source community

    Closest, but not quite there:
    - ikiwiki - http://ikiwiki.info (php): lots of awesome functionality and extensions, including easy to edit and based on a DVCS; lacks a review/forward-for-review queue; appears to be well-supported within the OSS community
    - gitit - http://gitit.net/ (haskell): easy to edit and based on a DVCS; lots of outputs in traditional formats; a great web-based GUI diff interface; lacks a review/forward-for-review queue; appears to be primarily maintained by one individual

    Read the article

  • Writing to a Comet stream using Tomcat 6.0

    - by user301247
    Hey, I'm new to Java servlets and I am trying to write one that uses Comet so that I can create a long-polling Ajax request. I can successfully start the stream and perform operations, but I can't write anything out. Here is my code:

        public class CometTestServlet extends HttpServlet implements CometProcessor {

            private static final long serialVersionUID = 1070949541963627977L;
            private MessageSender messageSender = null;
            protected ArrayList<HttpServletResponse> connections = new ArrayList<HttpServletResponse>();

            public void event(CometEvent cometEvent) throws IOException, ServletException {
                HttpServletRequest request = cometEvent.getHttpServletRequest();
                HttpServletResponse response = cometEvent.getHttpServletResponse();
                //final PrintWriter out = response.getWriter();

                if (cometEvent.getEventType() == CometEvent.EventType.BEGIN) {
                    PrintWriter writer = response.getWriter();
                    writer.println("<!doctype html public \"-//w3c//dtd html 4.0 transitional//en\">");
                    writer.println("<head><title>JSP Chat</title></head><body bgcolor=\"#FFFFFF\">");
                    writer.println("</body></html>");
                    writer.flush();
                    cometEvent.setTimeout(10 * 1000);
                    //cometEvent.close();
                } else if (cometEvent.getEventType() == CometEvent.EventType.ERROR) {
                    log("Error for session: " + request.getSession(true).getId());
                    synchronized (connections) {
                        connections.remove(response);
                    }
                    cometEvent.close();
                } else if (cometEvent.getEventType() == CometEvent.EventType.END) {
                    log("End for session: " + request.getSession(true).getId());
                    synchronized (connections) {
                        connections.remove(response);
                    }
                    PrintWriter writer = response.getWriter();
                    writer.println("</body></html>");
                    cometEvent.close();
                } else if (cometEvent.getEventType() == CometEvent.EventType.READ) {
                    //handleReadEvent(cometEvent);
                    InputStream is = request.getInputStream();
                    byte[] buf = new byte[512];
                    do {
                        int n = is.read(buf); // can throw an IOException
                        if (n > 0) {
                            log("Read " + n + " bytes: " + new String(buf, 0, n)
                                + " for session: " + request.getSession(true).getId());
                        } else if (n < 0) {
                            //error(cometEvent, request, response);
                            return;
                        }
                    } while (is.available() > 0);
                }
            }
        }

    Any help would be appreciated.

    Read the article

  • Am I reindexing this Sphinx index correctly?

    - by Ethan
    According to the Thinking Sphinx docs...

        Turning on delta indexing does not remove the need for regularly running a full re-index ...

    So I set up this cron job:

        50 10 * * * cd /var/www/my_app/current && /opt/ruby/bin/rake thinking_sphinx:index RAILS_ENV=production >> /var/www/my_app/current/log/reindexing.log 2>&1

    Is that a reasonable way to do it? Should I be doing something different?

    Read the article

  • Make: how to force make?

    - by HH
    The command

        $ make all

    gives errors such as

        rm: cannot remove '.lambda': No such file or directory

    so it stops. How can I force-make?

    Makefile:

        all:
            make clean
            make .lambda
            make .lambda_t
            make .activity
            make .activity_t_lambda

        clean:
            rm .lambda .lambda_t .activity .activity_t_lambda

        .lambda:
            awk '{printf "%.4f \n", log(2)/log(2.71828183)/$$1}' t_year > .lambda

        .lambda_t:
            paste .lambda t_year > .lambda_t

        .activity:
            awk '{printf "%.4f \n", $$1*2.71828183^(-$$1*$$2)}' .lambda_t > .activity

        .activity_t_lambda:
            paste .activity t_year .lambda | sed -e 's@\t@\t\&\t@g' -e 's@$$@\t\\\\@g' | tee > .activity_t_lambda > ../RESULTS/currentActivity.tex

    Read the article

  • Compressing large text data before storing it in the DB?

    - by Steel Plume
    Hello, I have an application which retrieves many large log files from systems on a LAN. Currently I put all the log files into PostgreSQL; the table has a column of type TEXT, and I don't plan any searches on this text column, because a separate external process retrieves all the files nightly and scans them for sensitive patterns. So the column value could also be a BLOB or a CLOB. Now my question is the following: the database already has its own compression system, but could I improve on that compression manually, e.g. with common compressor utilities? And above all, what if I manually pre-compress the large file and then put it into the table as binary - is that pointless, given that the database system provides its own internal compression?
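
    A minimal sketch of the pre-compression idea, assuming the log text is compressed with zlib before being stored in a bytea/BLOB column (the table and column names are made up; whether this actually beats the database's built-in compression is exactly what you would measure here):

        import zlib

        def compress_log(text, level=6):
            # Compress the raw log text; level 6 is zlib's default trade-off.
            return zlib.compress(text.encode("utf-8"), level)

        def decompress_log(blob):
            return zlib.decompress(blob).decode("utf-8")

        if __name__ == "__main__":
            with open("big_system.log", encoding="utf-8", errors="replace") as fh:
                raw = fh.read()
            packed = compress_log(raw)
            print("raw: %d bytes, compressed: %d bytes (%.1f%%)"
                  % (len(raw), len(packed), 100.0 * len(packed) / len(raw)))
            # The packed bytes would then go into a bytea column, e.g. with psycopg2:
            #   cur.execute("INSERT INTO log_archive (name, body) VALUES (%s, %s)",
            #               ("big_system.log", psycopg2.Binary(packed)))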

    Read the article

< Previous Page | 256 257 258 259 260 261 262 263 264 265 266 267  | Next Page >