Search Results

Search found 22000 results on 880 pages for 'worker process'.

  • How can I disable keep-alive on ASP.NET Web Service client requests?

    - by Matthew Brindley
    I have a few web servers behind an Amazon EC2 load balancer. I'm using TCP balancing on port 80 (rather than HTTP balancing). I have a client polling a Web Service (running on all web servers) for new items every few seconds. However, the client seems to stay connected to one server and polls that same server each time. I've tried using ServicePointManager to disable KeepAlive, but that didn't change anything. The outgoing connection still had its "connection: keep-alive" HTTP header, and the server kept the TCP connection open. I've also tried adding an override of GetWebRequest to the proxy class created by VS, which inherits from SoapHttpClientProtocol, but I still see the keep-alive header. If I kill the client's process and restart, it'll connect to a new server via the load balancer, but it'll continue polling that new server forever. Is there a way to force it to connect to a random server each time? I want the load from the one client to be spread across all of the web servers. The client is written in C# (as is the server) and uses a Web Reference (not a Service Reference), which points to the load balancer.
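
    A protocol-level sketch of the behaviour being asked for, shown in Python rather than the poster's C# (the host and path are placeholders): sending "Connection: close" and opening a fresh connection for every poll is what allows a TCP load balancer to pick a different backend each time.

        import http.client

        def poll_once(host="lb.example.com", path="/Service.asmx/GetNewItems"):
            # Placeholders: a real client would target the load balancer's DNS name
            # and the web service's actual endpoint.
            conn = http.client.HTTPConnection(host, 80, timeout=10)
            try:
                conn.request("GET", path, headers={"Connection": "close"})
                resp = conn.getresponse()
                return resp.status, resp.read()
            finally:
                conn.close()  # no keep-alive: the next poll negotiates a new TCP connection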

  • Problem with closing Excel from C#

    - by phenevo
    Hi, I've got a unit test with this code: Excel.Application objExcel = new Excel.Application(); Excel.Workbook objWorkbook = (Excel.Workbook)(objExcel.Workbooks._Open(@"D:\Selenium\wszystkieSeba2.xls", true, false, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value)); Excel.Worksheet ws = (Excel.Worksheet)objWorkbook.Sheets[1]; Excel.Range r = ws.get_Range("A1", "I2575"); DateTime dt = DateTime.Now; Excel.Range cellData = null; Excel.Range cellKwota = null; string cellValueData = null; string cellValueKwota = null; double dataTransakcji = 0; string dzien = null; string miesiac = null; int nrOperacji = 1; int wierszPoczatkowy = 11; int pozostalo = 526; cellData = r.Cells[wierszPoczatkowy, 1] as Excel.Range; cellKwota = r.Cells[wierszPoczatkowy, 6] as Excel.Range; if ((cellData != null) && (cellKwota != null)) { object valData = cellData.Value2; object valKwota = cellKwota.Value2; if ((valData != null) && (valKwota != null)) { cellValueData = valData.ToString(); dataTransakcji = Convert.ToDouble(cellValueData); Console.WriteLine("data transakcji to: " + dataTransakcji); dt = DateTime.FromOADate((double)dataTransakcji); dzien = dt.Day.ToString(); miesiac = dt.Month.ToString(); cellValueKwota = valKwota.ToString(); } } r.Cells[wierszPoczatkowy, 8] = "ok"; objWorkbook.Save(); objWorkbook.Close(true, @"C:\Documents and Settings\Administrator\Pulpit\Selenium\wszystkieSeba2.xls", true); objExcel.Quit(); Why, after the test finishes, is Excel still running as a process (it doesn't close)? And is there anything I can improve for better performance?

  • HTTPService/ResultEvent with Flex 3.2 versus Flex >= 3.5

    - by Julian
    Hey everybody, through a design decision or whatever, Adobe changed the content of the ResultEvent fired by an HTTPService object. Take a look at the following example: var httpService:HTTPService = myHTTPServices.getResults(); httpService.addEventListener(ResultEvent.RESULT,resultHandler); httpService.send(); /** * Handles the login process */ function resultHandler(event:ResultEvent):void { // get http service var httpService = (event.target as HTTPService); // do something } It works like a charm with Flex 3.2. But when I try to compile it with Flex 3.5 or Flex 4.0, event.target as HTTPService is null. I figured out that event.target is now an instance of HTTPOperation. That is interesting because I can't find HTTPOperation in the langref. However, I think what Flash Builder's debugger means is mx.rpc.http.Operation. The debugger also shows that event.target has a private attribute httpService which is the instance I expected to get with event.target. But it's private, so event.target.httpService doesn't work. If I only want to remove the EventListener I can cast event.target as EventDispatcher. But I need to use methods from HTTPService. So: how can I get the HTTPService instance from the ResultEvent? Any help would be appreciated. Thanks! J.

  • Getting a job in the games industry as a developer, just knowing a game engine

    - by numerical25
    I recently enrolled in a community college for game development. But I am skeptical about the curriculum. I have no experience in the gaming industry so I wouldn't be able to tell whether it's a good investment or not. So I am asking you. I don't want to get too much into the details of all the classes I am taking so I will try to be brief. By the time I graduate, I should have an understanding of how a game engine works. I will be working with the Unreal Engine to develop a multiplayer game from scratch. So in the process of my final project, I will learn how to work within the Unreal Engine, learn Python and learn how to use its API to connect to a remote server and build game mechanics. Overall I will also receive an associate's degree in game development. I learn C++ but not C. The director said he was trying to implement C in the program as well. What I notice is I will not learn how to build a 3D game engine from scratch. They do not teach any artificial intelligence (AI). I will not learn how to work with the graphics card using a graphics API such as DirectX or OpenGL. I know building a game engine from scratch is a little complex, but at the same time the track is requiring me to take some advanced mathematics courses such as calculus and geometry 1 and 2. I also have to take a physics class. I just think that's a little much for just learning how to use the Unreal Engine but not actually building one or trying to learn the anatomy of a game engine. Is this good enough to possibly land me a job in the industry? If I left anything out or was not detailed enough, please feel free to ask more questions. Edit: I do learn data structures and algorithms.

  • GTK implementation of MessageBox

    - by Bernard
    I have been trying to implement Win32's MessageBox using GTK. The app using SDL/OpenGL, so this isn't a GTK app. I handle the initialisation (gtk_init) sort of stuff inside the MessageBox function as follows: int MessageBox(HWND hwnd, const char* text, const char* caption, UINT type){ GtkWidget *window = NULL; GtkWidget *dialog = NULL; gtk_init(&gtkArgc, &gtkArgv); window = gtk_window_new(GTK_WINDOW_TOPLEVEL); g_signal_connect(G_OBJECT(window), "delete_event", G_CALLBACK(delete_event), NULL); g_signal_connect(G_OBJECT(window), "destroy", G_CALLBACK(destroy), NULL); // gcallback calls gtk_main_quit() gtk_init_add((GtkFunction)gcallback, NULL); if (type & MB_YESNO) { dialog = gtk_message_dialog_new(GTK_WINDOW(window), GTK_DIALOG_DESTROY_WITH_PARENT, GTK_MESSAGE_QUESTION, GTK_BUTTONS_YES_NO, text); } else { dialog = gtk_message_dialog_new(GTK_WINDOW(window), GTK_DIALOG_DESTROY_WITH_PARENT, GTK_MESSAGE_INFO, GTK_BUTTONS_OK, text); } gtk_window_set_title(GTK_WINDOW(dialog), caption); gint result = gtk_dialog_run(GTK_DIALOG(dialog)); gtk_main(); gtk_widget_destroy(dialog); if (type & MB_YESNO) { switch (result) { default: case GTK_RESPONSE_DELETE_EVENT: case GTK_RESPONSE_NO: return IDNO; break; case GTK_RESPONSE_YES: return IDYES; break; } } return IDOK;} Now, I am by no means an experienced GTK programmer, and I realise that I'm probably doing something(s) horribly wrong. However, my problem is that the last dialog popped up with this function stays around until the process exits. Any ideas?

  • Obtaining information about executable code from exe/pdb

    - by Miro Kropacek
    Hello, I need to extract code (but not data!) from classic Win32 exe/dll files. It's clear I can't do this only by extracting the code segment's content (because the code segment also contains data -- jump tables, for example) and that I need some help from the compiler. *.map files are nice but they only contain addresses of functions, i.e. the safest thing I can do is to start at that address and process until the first return / jump instruction (because part of the function could be the data mentioned above). *.pdb files are better but I'm not sure what tools to use to extract information like this -- I took a look at DbgHelp and the DIA SDK; the latter seems to be the right tool but it doesn't look very simple. So my question/questions: To your knowledge, is it possible to extract information about code/data position (address + length) only via DbgHelp? If the DIA SDK is the only way, any idea what I should call to get information like that? (That COM stuff is pretty heavy.) Is there any other way? Of course my concern is about Visual Studio and C/C++ source compilation in the first place. Thanks for any hint.

  • Remote DLL Registration without access to HKEY_CLASSES_ROOT

    - by mohlsen
    We have a legacy VB6 application that updates itself on startup by pulling down the latest files and registering the COM components. This works for both local (regsvr32) ActiveX COM components and remote (clireg32) ActiveX COM components registered in COM+ on another machine. New requirements are preventing us from writing to HKEY_LOCAL_MACHINE (HKLM) for security reasons, which is what obviously happens by default when calling regsvr32 and clireg32. We have come up with a way to register the local COM component under HKEY_CURRENT_USER\Software\Classes (HKCU) using the RegOverridePredefKey Windows API method. This works by redirecting the inserts into the registry to the HKCU location. Then when the COM components are instantiated, Windows first looks in HKCU before looking for component information in HKLM. This replaces what regsvr32 is doing. The problem we are experiencing at this time is that when we attempt to register a VBR / TLB using clireg32, this registration process also adds registration keys to HKEY_LOCAL_MACHINE. Is there a way to redirect clireg32.exe to register components in HKEY_CURRENT_USER? Are there any other methods that would allow us to register these COM+ components on client machines with limited security access? Our only solution at this time would be to manually write the registration information to the registry, but that is not ideal and would be a maintenance issue.

  • Problem with testing In App with sandbox test account

    - by Nava Carmon
    Hi, I created a test user account through Manage User Accounts in iTunes Connect. When you create such an account you have to select a valid storefront for your account. I chose the US store. Now I signed out from the store in the App Store settings on my device, ran the application and tried to perform a purchase. I successfully log in with my test account. After I press Confirm when entering my credentials I get an alert that comes from SKPaymentTransactionStateFailed in the observer. It says "Your account is only valid for purchases in the US iTunes Store". The error state = 0, unknown. The second time I try to perform the purchase, StoreKit only asks me for a password, as if the previous login was successful. After entering a password I can perform a purchase. My question is whether this is only because it's a testing account and the application is not actually on the App Store? What should I do to avoid this message or at least to continue the purchasing process? Thanks a lot, Nava

  • How to eliminate Unhandled Exception dialog produced by 3rd party application

    - by Tappen
    I'm working with a 3rd party executable that I can't recompile (vendor is no longer available). It was originally written under .Net 1.1 but seems to work fine under later versions as well. I launch it using Process.Start from my own application (I've tried p/invoke CreateProcess as well with the same results so that's not relevant) Unfortunately this 3rd party app now throws an unhandled exception as it exits. The Microsoft dialog box has a title like "Exception thrown from v2.0 ... Broadcast Window" with the version number relating to the version of .Net it's running under (I can use a .exe.config file to target different .Net versions, doesn't help). The unhandled exception dialog box on exit doesn't cause any real problems, but is troubling to my users who have to click OK to dismiss it every time. Is there any way (a config file option perhaps) to disable this dialog from showing for an app I don't have the source code to? I've considered loading it in a new AppDomain which would give me access to the UnhandledException event but there's no indication I could change the appearence of the dialog. Maybe someone knows what causes the exception and I can fix this some other way?

  • How do I make a jQuery POST function open the new page?

    - by ciclistadan
    I know that a submit button in HTML can submit a form which opens the target page, but how do I make a jQuery AJAX call POST information to a new page and display that new page? I am submitting information that is gathered by clicking elements (which toggle a new class) and then all items with this new class are added to an array and POSTed to a new page. I can get it to POST the data but it seems to be functioning in an AJAX, non-refreshing manner, not submitting the page and redirecting to the new page. How might I go about doing this? Here's the script section: //onload function $(function() { //toggles items to mark them for purchase //add event handler for selecting items $(".line").click(function() { //get the lines item number var item = $(this).toggleClass("select").attr("name"); }); $('#process').click(function() { var items = []; //place selected numbers in a string $('.line.select').each(function(index){ items.push($(this).attr('name')); }); $.ajax({ type: 'POST', url: 'additem.php', data: 'items='+items, success: function(){ $('#menu').hide(function(){ $('#success').fadeIn(); }); } }); }); return false; }); Any pointers would be great!! Thanks

  • High-load MySQL on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have Debian server with 32 gb memory. And there is apache2, memcached and nginx on this server. Memory load always on maximum. Only 500m free. Most memory leak do MySql. Apache only 70 clients configured, other services small memory usage. When mysql use all memory it stops. And nothing works, need mysql reboot. Mysql configured use maximum 24 gb memory. I have hight weight InnoDB bases. (400000 rows, 30 gb). And on server multithread daemon, that makes many inserts in this tables, thats why InnoDB. There is my mysql config. [mysqld] # # * Basic Settings # default-time-zone = "+04:00" user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking default-time-zone='Europe/Moscow' # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. # # * Fine Tuning # #low_priority_updates = 1 concurrent_insert = ALWAYS wait_timeout = 600 interactive_timeout = 600 #normal key_buffer_size = 2024M #key_buffer_size = 1512M #70% hot cache key_cache_division_limit= 70 #16-32 max_allowed_packet = 32M #1-16M thread_stack = 8M #40-50 thread_cache_size = 50 #orderby groupby sort sort_buffer_size = 64M #same myisam_sort_buffer_size = 400M #temp table creates when group_by tmp_table_size = 3000M #tables in memory max_heap_table_size = 3000M #on disk open_files_limit = 10000 table_cache = 10000 join_buffer_size = 5M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #myisam_use_mmap = 1 max_connections = 200 thread_concurrency = 8 # # * Query Cache Configuration # #more ignored query_cache_limit = 50M query_cache_size = 210M #on query cache query_cache_type = 1 # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log server-id = 1 log-bin = /var/lib/mysql/mysql-bin #replicate-do-db = gate log-bin-index = /var/lib/mysql/mysql-bin.index log-error = /var/lib/mysql/mysql-bin.err relay-log = /var/lib/mysql/relay-bin relay-log-info-file = /var/lib/mysql/relay-bin.info relay-log-index = /var/lib/mysql/relay-bin.index binlog_do_db = 24avia expire_logs_days = 10 max_binlog_size = 100M read_buffer_size = 4024288 innodb_buffer_pool_size = 5000M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 8 table_definition_cache = 2000 group_concat_max_len = 16M #binlog_do_db = gate #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. #skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot! 
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 500M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 32M key_buffer_size = 512M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ Please, help me make it stable. Memory used /etc/mysql # free total used free shared buffers cached Mem: 32930800 32766424 164376 0 139208 23829196 -/+ buffers/cache: 8798020 24132780 Swap: 33553328 44660 33508668 Maybe my problem not in memory, but MySQL stops every day. As you can see, cache memory free 24 gb. Thank to Michael Hampton? for correction. Load overage on server 3.5. Maybe hdd or another problem? Maybe my config not optimal for 30gb InnoDB ? I'm already try mysqltuner and tunung-primer.sh , but they marked all green. Mysqltuner output mysqltuner >> MySQLTuner 1.0.1 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.24-9-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 112G (Tables: 1528) [--] Data in InnoDB tables: 39G (Tables: 340) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 344 -------- Performance Metrics ------------------------------------------------- [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B) [--] Reads / Writes: 84% / 16% [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads) [OK] Maximum possible memory usage: 26.3G (83% of installed RAM) [OK] Slow queries: 1% (259K/14M) [!!] Highest connection usage: 100% (201/200) [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads) [OK] Query cache efficiency: 74.3% (8M cached / 11M selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts) [!!] Joins performed without indexes: 106025 [!!] Temporary tables created on disk: 49% (351K on disk / 715K total) [OK] Thread cache hit rate: 99% (249 created / 259K connections) [!!] Table cache hit rate: 15% (2K open / 13K opened) [OK] Open file limit used: 15% (3K/20K) [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks) [!!] 
InnoDB data size / buffer pool: 39.4G/5.9G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce or eliminate persistent connections to reduce connection usage Adjust your join queries to always utilize indexes Temporary table size is already large - reduce result set size Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Variables to adjust: max_connections (> 200) wait_timeout (< 600) interactive_timeout (< 600) join_buffer_size (> 5.0M, or always use indexes with joins) table_cache (> 10000) innodb_buffer_pool_size (>= 39G) Mysql primer output -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.5.24-9-log x86_64 Uptime = 0 days 8 hrs 20 min 50 sec Avg. qps = 478 Total Questions = 14369568 Threads Connected = 16 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1.000000 sec. You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is enabled Binlog sync is not enabled, you could loose binlog records during a server crash WORKER THREADS Current thread_cache_size = 50 Current threads_cached = 45 Current threads_per_sec = 0 Historic threads_per_sec = 0 Your thread_cache_size is fine MAX CONNECTIONS Current max_connections = 200 Current threads_connected = 11 Historic max_used_connections = 201 The number of used connections is 100% of the configured maximum. 
You should raise max_connections INNODB STATUS Current InnoDB index space = 214 M Current InnoDB data space = 39.40 G Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 5.85 G Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 23.46 G Configured Max Per-thread Buffers : 15.84 G Configured Max Global Buffers : 7.54 G Configured Max Memory Limit : 23.39 G Physical Memory : 31.40 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 5.61 G Current key_buffer_size = 1.47 G Key cache miss rate is 1 : 5578 Key buffer free ratio = 77 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is enabled Current query_cache_size = 200 M Current query_cache_used = 101 M Current query_cache_limit = 50 M Current Query cache Memory fill ratio = 50.59 % Current query_cache_min_res_unit = 4 K MySQL won't cache query results that are larger than query_cache_limit in size SORT OPERATIONS Current sort_buffer_size = 64 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 5.00 M You have had 106606 queries where a join could not use an index properly You have had 8 joins without keys that check for key usage after each row join_buffer_size >= 4 M This is not advised You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. OPEN FILES LIMIT Current open_files_limit = 20210 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. Your open_files_limit value seems to be fine TABLE CACHE Current table_open_cache = 10000 tables Current table_definition_cache = 2000 tables You have a total of 1910 tables You have 2151 open tables. The table_cache value seems to be fine TEMP TABLES Current max_heap_table_size = 2.92 G Current tmp_table_size = 2.92 G Of 366426 temp tables, 49% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables. TABLE SCANS Current read_buffer_size = 3 M Current table scan ratio = 2846 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 185 You may benefit from selective use of InnoDB. If you have long running SELECT's against MyISAM tables and perform frequent updates consider setting 'low_priority_updates=1'

  • How to estimate the contribution of an individual to a software project?

    - by Amit Kumar
    I work on a software project and would like to estimate the percentage of the total contribution that I have put into the development of the software. Is there some tool that does this? Such a tool can be useful for appraisals or negotiations, for example. After all, we work for money (yes, not only money, but the point remains). I think there is enough hand-waving for the most important things. The estimation is very subjective (at least to me now) but I do not know of any tool that provides even a subjective estimate. I know of Sloccount, which spells out the total effort using the lines of code but not on a per-developer basis. My idea of an ideal tool for this purpose would: measure the complexity of the code (more complex is more effort, but more effort is not necessarily more contribution); measure the decomposability/flexibility of the software (more decomposable is better); measure how much library code is used -- using library code speeds up the development process, increases the associated risk and requires the developer to already know, or learn about, the library; be intelligent enough to differentiate between "who wrote the code", "who copied the code" and "who indented the code". It is difficult to differentiate between the complexity in the implementation and the intrinsic complexity of the problem. Perhaps a comparison can be made with an equivalent open source counterpart if there is one, or for each submodule separately. If there is no such tool, is there no merit in having such a tool? Or do you believe in "I do work, I do not measure"? It takes time after all. Perhaps the project manager should do this estimation continuously, say, weekly. Are there any standards? Yes, standardization is difficult because every project has a different goal, but difficult does not mean it is not useful.
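
    A very crude starting point -- counting surviving lines per author from svn annotate, which captures none of the complexity or decomposability criteria listed above -- could look like the following Python sketch; the file paths are placeholders and the percentage is only a rough proxy for contribution:

        #!/usr/bin/env python
        """Rough per-author line counts from `svn annotate` output (a crude proxy only)."""
        import subprocess
        from collections import Counter

        def annotate_counts(paths):
            counts = Counter()
            for path in paths:
                out = subprocess.check_output(["svn", "annotate", path], text=True)
                for line in out.splitlines():
                    fields = line.split(None, 2)  # revision, author, source line
                    if len(fields) >= 2:
                        counts[fields[1]] += 1
            return counts

        if __name__ == "__main__":
            totals = annotate_counts(["src/main.c", "src/util.c"])  # placeholder file list
            grand_total = sum(totals.values()) or 1
            for author, lines in totals.most_common():
                print("%-20s %6d lines (%.1f%%)" % (author, lines, 100.0 * lines / grand_total))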

  • Character Encoding: â??

    - by akaphenom
    I am trying to piece together the mysterious string of characters â?? I am seeing quite a bit of in our database - I am fairly sure this is a result of conversion between character encodings, but I am not completely positive. The users are able to enter text (or cut and paste) into an Ext JS rich text editor. The data is posted to a servlet which persists it to the database, and when I view it in the database I see those strange characters... Is there any way to decode these back to their original meaning if I were able to discover the correct encoding - or is there a loss of bits or bytes that has occurred through the conversion process? Users are cutting and pasting from multiple versions of MS Word and PDF. Does the encoding follow where the user copied from? Thank you. The website is UTF-8. We are using MS SQL Server 2005; SELECT serverproperty('Collation') -- Server default collation. Latin1_General_CI_AS SELECT databasepropertyex('xxxx', 'Collation') -- Database default SQL_Latin1_General_CP1_CI_AS and the column: Column_name Type Computed Length Prec Scale Nullable TrimTrailingBlanks FixedLenNullInSource Collation text varchar no -1 yes no yes SQL_Latin1_General_CP1_CI_AS The non-Unicode equivalents of the nchar, nvarchar, and ntext data types in SQL Server 2000 are listed below. When Unicode data is inserted into one of these non-Unicode data type columns through a command string (otherwise known as a "language event"), SQL Server converts the data to the data type using the code page associated with the collation of the column. When a character cannot be represented on a code page, it is replaced by a question mark (?), indicating the data has been lost. Appearance of unexpected characters or question marks in your data indicates your data has been converted from Unicode to non-Unicode at some layer, and this conversion resulted in lost characters. So this may be the root cause of the problem... and not an easy one to solve on our end.
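
    For what it's worth, the usual mechanism behind strings like "â??" can be reproduced in a few lines of Python; this is only a sketch of that mechanism, and the code pages involved (UTF-8 misread as cp1252) are assumptions that may not match this particular setup:

        # Reproduce the typical mojibake: UTF-8 bytes decoded with a single-byte code page.
        original = "\u2019"                    # RIGHT SINGLE QUOTATION MARK, common in Word text
        utf8_bytes = original.encode("utf-8")  # b'\xe2\x80\x99'
        mangled = utf8_bytes.decode("cp1252")  # 'â€™' -- three characters instead of one
        print(mangled)

        # The damage is reversible only while the bytes are intact; once a non-Unicode
        # column substitutes '?', the original characters are gone for good.
        recovered = mangled.encode("cp1252").decode("utf-8")
        print(recovered == original)           # True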

  • Rails apps blew up on mediatemple's (dv) server

    - by BandsOnABudget
    I managed to fix this issue but I wanted to document it here for any others who might have similar problems. I'm running a MediaTemple (dv) rage server. Monit started sending me alerts that I was having resource limitations on the server. I logged into Plesk and the CPU was pinned at 99.9%. Rebooted the server, catastrophe avoided... Not quite - none of my Rails apps were loading. My setup: Ruby 1.8.6, Rails 2.3.5 with Passenger installed as an Apache module. I noticed a defunct ruby process so I killed it and rebooted the server, but ruby continued to come back as defunct. I started trawling through the Apache log and saw errors regarding updating RubyGems. I updated to the latest but then continued to get gem errors. Basically, I had to go through all my apps and update any gems manually, restart Apache, and all was restored. I'm not really sure of the cause of the issue but wanted to note it for posterity. Anybody else in the community ever have similar issues???

  • Experience migrating legacy Cobol/PL1 to Java

    - by MadMurf
    ORIGINAL Q: I'm wondering if anyone has had experience of migrating a large Cobol/PL1 codebase to Java? How automated was the process and how maintainable was the output? How did the move from transactional to OO work out? Any lessons learned along the way or resources/white papers that may be of benefit would be appreciated. EDIT 7/7: Certainly the NACA approach is interesting, the ability to continue making your BAU changes to the COBOL code right up to the point of releasing the JAVA version has merit for any organization. The argument for procedural Java in the same layout as the COBOL to give the coders a sense of comfort while familiarizing with the Java language is a valid argument for a large organisation with a large code base. As @Didier points out the $3mil annual saving gives scope for generous padding on any BAU changes going forward to refactor the code on an ongoing basis. As he puts it if you care about your people you find a way to keep them happy while gradually challenging them. The problem as I see it with the suggestion from @duffymo to Best to try and really understand the problem at its roots and re-express it as an object-oriented system is that if you have any BAU changes ongoing then during the LONG project lifetime of coding your new OO system you end up coding & testing changes on the double. That is a major benefit of the NACA approach. I've had some experience of migrating Client-Server applications to a web implementation and this was one of the major issues we encountered, constantly shifting requirements due to BAU changes. It made PM & scheduling a real challenge. Thanks to @hhafez who's experience is nicely put as "similar but slightly different" and has had a reasonably satisfactory experience of an automatic code migration from Ada to Java. Thanks @Didier for contributing, I'm still studying your approach and if I have any Q's I'll drop you a line.

  • Eclipse PDE - Plug-in, Feature, and Product Versioning

    - by Michael
    I am having much confusion over the process of upgrading version numbers in dependent plug-ins, features, and products in a fairly large Eclipse workspace. I have made API changes to Java code residing in an existing plug-in, which thus requires an increase of the Major part of the version identifier. This plug-in serves as a dependency to a given feature, where the feature is later included in a product. From the documentation at http://wiki.eclipse.org/Version_Numbering, I understand (for the most part) when the proper number should be increased on the containing plug-in itself. However, how would this Major version number change on the plug-in affect dependent, "down-the-line" items (e.g., features, products)? For example, assume we have the typical "Hello World" setup as follows: Plug-in: com.example.helloworld, version 1.0.0 Feature: com.example.helloworld.feature, version 1.0.0 Product: com.example.helloworld.product, version 1.0.0 If I were to make an API change in the plug-in, this would require a version update to 2.0.0. What would then be the version of the feature, 1.1.0? The same question can be applied at the product level as well (e.g., if the feature is 1.1.0 OR 2.0.0, what is the product version number)? I'm sure this is quite the newbie question so I apologize for wasting anyone's time and effort. I have searched for this type of content but all I am finding are examples showing how to develop a plug-in, feature, product, and update site for the first time. The only other content related to my search has been about developing feature patches and has not touched on the versioning aspect as much as I would prefer. I am coming into an Eclipse RCP / PDE environment for the first time and need to learn the proper way and / or best practices for making such versioning updates and how best to reflect them throughout other dependent projects in the workspace.

  • Best Practices for Setup and Management of an Open Source Project

    - by VirtuosiMedia
    Later this year I want to release a PHP framework that I've been working on as open source. I do use source control (SVN), but on an extremely limited basis. I'm self-taught, I develop by myself and don't have the experience of working with large teams. I have some ideas about what can help make a project successful, but I'm fuzzy on some of the details. Since it's not yet released, I want to do everything I can to set up the right infrastructure from the beginning. What do I need to know in order to set up and manage a successful project? Some ideas that I have to make it successful (beyond marketing it): Good documentation and tutorials Automated unit tests and builds to push updates to the website A clear roadmap Bug tracking integrated with the source control A style guide to keep the code consistent along with clear A forum for the community to get support, share ideas, etc. A good example application built with the framework A blog to keep the community informed Maintaining backwards compatibility wherever possible Some of my questions: How do I set up and automate a one-step submit-test-commit-generate API docs-push update to website process? How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated? What are some of the pitfalls that can be avoided in terms of the project community? I'd prefer to have it be as friendly and helpful as possible without a lot of drama. I'd love to learn from your experience on any of these points. If you think I'm missing anything big, please share that as well. Any resources (preferably geared toward a beginner) that you could point me towards would also be greatly appreciated.
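
    As a rough illustration of the one-step automation asked about above, a small driver script can chain the steps and stop on the first failure; every tool name, path, and host below (PHPUnit, phpDocumentor, SVN, rsync) is an assumption and would need to be swapped for whatever the project actually uses:

        #!/usr/bin/env python
        """Sketch of a one-step test/commit/docs/publish helper -- all commands are placeholders."""
        import subprocess
        import sys

        STEPS = [
            ["phpunit", "tests"],                                    # run the test suite
            ["phpdoc", "-d", "src", "-t", "build/apidocs"],          # regenerate API docs
            ["svn", "commit", "-m", "Automated build/docs update"],  # commit the results
            ["rsync", "-az", "build/apidocs/", "user@host:/var/www/apidocs/"],  # push docs to the site
        ]

        def main():
            for cmd in STEPS:
                print("-> " + " ".join(cmd))
                if subprocess.run(cmd).returncode != 0:  # stop the pipeline at the first failing step
                    sys.exit("step failed: " + " ".join(cmd))

        if __name__ == "__main__":
            main()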

  • MVC 2.0 - jqGrid Sorting with Multiple Tables

    - by Billy Logan
    I am in the process of implementing the jqGrid and would like to be able to use the sorting functionality. I have run into some issues with sorting columns that are related to the base table. Here is the script to load the grid: public JsonResult GetData(GridSettings grid) { try { using (IWE dataContext = new IWE()) { var query = dataContext.LKTYPE.Include("VWEPICORCATEGORY").AsQueryable(); ////sorting query = query.OrderBy<LKTYPE>(grid.SortColumn, grid.SortOrder); //count var count = query.Count(); //paging var data = query.Skip((grid.PageIndex - 1) * grid.PageSize).Take(grid.PageSize).ToArray(); //converting in grid format var result = new { total = (int)Math.Ceiling((double)count / grid.PageSize), page = grid.PageIndex, records = count, rows = (from host in data select new { TYPE_ID = host.TYPE_ID, TYPE = host.TYPE, CR_ACTIVE = host.CR_ACTIVE, description = host.VWEPICORCATEGORY.description }).ToArray() }; return Json(result, JsonRequestBehavior.AllowGet); } } catch (Exception ex) { //send the error email ExceptionPolicy.HandleException(ex, "Exception Policy"); } //have to return something if there is an issue return Json(""); } As you can see the description field is a part of the related table("VWEPICORCATEGORY") and the order by is targeted at LKTYPE. I am trying to figure out how exactly one goes about sorting that particular field or maybe even a better way to implement this grid using multiple tables and it's sorting functionality. Thanks in advance, Billy

  • Performance problem with System.Net.Mail

    - by Saif Khan
    I have this unusual problem with mailing from my app. At first it wasn't working (I was getting "unable to relay" errors); anyway, I added the proper authentication and it works. My problem now is, if I try to send around 300 emails (each with a 500k attachment), the app starts hanging around 95% through the process. Here is some of my code, which is called for each mail to be sent: Using mail As New MailMessage() With mail .From = New MailAddress(My.Resources.EmailFrom) For Each contact As Contact In Contacts .To.Add(contact.Email) Next .Subject = "Accounting" .Body = My.Resources.EmailBody 'Back the stream up to the beginning or else the attachment 'will be sent as a zero (0) byte file. attachment.Seek(0, SeekOrigin.Begin) .Attachments.Add(New Attachment(attachment, String.Concat(Item.Year, Item.AttachmentType.Extension))) End With Dim smtp As New SmtpClient("192.168.1.2") With smtp .DeliveryMethod = SmtpDeliveryMethod.Network .UseDefaultCredentials = False .Credentials = New NetworkCredential("username", "password") .Send(mail) End With End Using With item .SentStatus = True .DateSent = DateTime.Now.Date .Save() End With Return I was thinking, can I just prepare all the mails and add them to a collection, then open one SMTP connection and just iterate the collection, calling Send like this: Using mail As New MailMessage() ... MailCollection.Add(mail) End Using ... Dim smtp As New SmtpClient("192.168.1.2") With smtp .DeliveryMethod = SmtpDeliveryMethod.Network .UseDefaultCredentials = False .Credentials = New NetworkCredential("username", "password") For Each mail in MailCollection .Send(mail) Next End With

  • Java 1.4 singleton containing a mutable field

    - by Philippe
    Hi, I'm working on a legacy Java 1.4 project, and I have a factory that instantiates a CSV file parser as a singleton. In my CSV file parser, however, I have a HashSet that will store objects created from each line of my CSV file. All of that will be used by a web application, and users will be uploading CSV files, possibly concurrently. Now my question is: what is the best way to prevent my list of objects from being modified by two users? So far, I'm doing the following: final class MyParser { private File csvFile = null; private static Set myObjects = Collections.synchronizedSet(new HashSet()); public synchronized void setFile(File file) { this.csvFile = file; } public void parse() { FileReader fr = null; try { fr = new FileReader(csvFile); synchronized(myObjects) { myObjects.clear(); while(...) { // foreach line of my CSV, create a "MyObject" myObjects.add(new MyObject(...)); } } } catch (Exception e) { //... } } } Should I leave the lock only on the myObjects Set, or should I declare the whole parse() method as synchronized? Also, how should I synchronize both the setting of the csvFile and the parsing? I feel like my current design is broken because threads could modify the CSV file several times while a possibly long parse process is running. I hope I'm being clear enough, because I'm a bit confused myself about these multi-synchronization issues. Thanks ;-)

  • How to correctly set the ORACLE_HOME variable on Ubuntu 9.x?

    - by nowarninglabel
    I have the same problem as listed here: http://stackoverflow.com/questions/52239/oracle-lost-sysdba-password although I did not lose the password; I entered it twice in the configure script originally, and then when I went to log in (localhost:8080/apex), the password was not accepted. I don't have anything in the database, I just want to install and use Oracle XE. I have tried apt-get removing it twice and reinstalling, but if I try to run /etc/init.d/oracle-xe configure again I get "Oracle Database 10g Express Edition is already configured", despite having removed any folders I could find for Oracle XE the second time. I tried running sqlplus "/ as sysdba" but all I get is: "Error 6 initializing SQL*Plus Message file sp1.msb not found SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory" I tried setting the variable via export (also tried set). Tried: "export ORACLE_HOME=/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin/sqlplus" and all the subdirectories of that. Same error every time. What is ORACLE_HOME supposed to be set to? The only references I have seen either just speak in general terms or give the above path up to the version number followed by "/db_1". I do not have a db_1. Let me know if you need any clarification. I don't understand what I did wrong in this process.

  • Is This a Valid Way to Use Blocks in Objective-C?

    - by Carter
    I've been building a HTTP client that uses web services to synchronize information between the client and server. I've been using Blocks and NSURLConnection to achieve this on the client side, but I'm getting frequent EXC_BAD_ACCESS crashes in objc_msgSend(). From what I understand, this usually means that a stored block that has fallen off the stack has been called. I think I've coded things correctly to avoid this, but I'm still stuck. Here is conceptually what my code is doing. It starts by calling "synchronizeWithWebServer". That method invokes "listRootObjectsOnServerWithBlock:" which takes in a block to be called when the method returns. "listRootObjectsOnServersWithBlock:" initiates a NSURLConnection to the web server asynchronously. It to expects a block to be called when it returns. Inside that block I want to be able to execute the original Block (so aptly named 'block'). This is only a simplified version of my code. The real synchronization process is more complex but it's mostly more of the same as what you see below. Sometimes the code works perfectly, but about 80% of the time it crashes very early on in the routine. It seems to be more vulnerable to crashing when my data set gets larger. - (void)synchronizeWithWebServer { [self listRootObjectsOnServerWithBlock:^(NSArray *results, NSError *error) { //Iterate over result objects and perform some other similar routines. }]; } - (void)listRootObjectsOnServerWithBlock:(void (^)(NSArray *results, NSError *error))block { //Create NSURLRequest Here //Create connection asynchronously. block = [block copy]; [NSURLConnection sendAsynchronousRequest:urlRequest queue:[NSOperationQueue currentQueue] completionHandler:^(NSURLResponse *response, NSData *data, NSError *error){ //Parse response from web server (stored in NSData *data) NSArray *results = ..... //Call 'block' block(results, error); [block release]; }]; }

  • Web.Routing for the site root or homepage

    - by Aquinas
    I am doing some work with Web.Routing, using it to have friendly URLs and nice REST-like interfaces to a site that is essentially rendered by a single IHttpHandler. There are no WebForms; the handler generates all the HTML/JSON and writes it as part of ProcessRequest. This works well for things like /Sites/Accounting, for example, but I can't get it to work for the site root, i.e. '/'. I have tried registering a route with an empty string and with 'default.aspx' (which is the empty aspx file I keep in my root folder to play nice with Cassini and IIS). I set RouteExistingFiles to false explicitly, but whatever I do, when hitting the root URL it still opens default.aspx, which has no code it inherits from, and contains a simple h1 tag to show that I've hit it. I don't want to change the default file to redirect to a desired route; I just want the equivalent of a 'default' route that is applied when no other routes are found, similar to MVC. For reference, the previous version of the site didn't use Web.Routing, but had a handler referenced in the web.config that was perfectly capable of intercepting requests for the root or default.aspx. Specs: ASP.NET 3.5 SP1, C#, no WebForms, MVC, or OpenRasta. Plain old IHttpHandlers.

  • Can't send client certificate via SslStream

    - by Jonathan
    I am doing an SSL3 handshake using an SslStream, but, in spite of my best efforts, the SslStream never sends a client certificate on my behalf. Here is the code: SSLConnection = new System.Net.Security.SslStream(SSLInOutStream, false, new System.Net.Security.RemoteCertificateValidationCallback(AlwaysValidRemoteCertificate), new System.Net.Security.LocalCertificateSelectionCallback(ChooseLocalCertificate)); X509CertificateCollection CC = new X509CertificateCollection(); CC.Add(Org.BouncyCastle.Security.DotNetUtilities.ToX509Certificate(MyLocalCertificate)); SSLConnection.AuthenticateAsClient("test", CC, System.Security.Authentication.SslProtocols.Ssl3, false); and then I have AlwaysValidRemoteCertificate just returning true, and ChooseLocalCertificate returning the zeroth element of the array. The code probably looks a little weird because the project is a little weird, but I think that is beside the point here. The SSL handshake completes. The issue is that instead of sending a certificate message on my behalf (in the handshake process), with the ASN.1 encoded certificate (MyLocalCertificate), the SslStream sends an SSL alert number 41 (no certificate) and then carries on. I know this from packet sniffing. After the handshake is completed, the SslStream marks IsAuthenticated as true, IsMutuallyAuthenticated as false, and its LocalCertificate member is null. I feel like I'm probably missing something pretty obvious here, so any ideas would be appreciated. I am a novice with SSL, and this project is off the beaten path, so I am kind of at a loss. P.S. 1: My ChooseLocalCertificate routine is called twice during the handshake, and returns a valid (as far as I can tell), non-null certificate both times. P.S. 2: SSLInOutStream is my own class, not a NetworkStream. Like I said, though, the handshake proceeds mostly normally, so I doubt this is the culprit... but who knows?

  • Hadoop streaming with Python and python subprocess

    - by Ganesh
    I have established a basic Hadoop master-slave cluster setup and am able to run MapReduce programs (including Python) on the cluster. Now I am trying to run Python code which accesses a C binary, and so I am using the subprocess module. I am able to use Hadoop streaming for normal Python code, but when I include the subprocess module to access a binary, the job fails. As you can see in the logs below, the hello executable is recognised for packaging, but the code still fails to run. . . packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null JarBuilder.addNamedStream hello . . 12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1 12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local] 12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057 12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run: 12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057 12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057 12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0% 12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100% 12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run: 12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057 12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057 12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful! 12/03/07 22:32:05 INFO streaming.StreamJob: killJob... Streaming Job Failed! The command I am trying is: hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose where hello is the C executable. It is a simple hello-world program which I am using to check basic functioning. My Python code is: #!/usr/bin/env python import subprocess subprocess.call(["./hello"]) Any help with how to get the executable to run with Python in Hadoop streaming, or with debugging this, would help me move forward. Thanks, Ganesh
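
    For comparison, a streaming mapper that shells out to a shipped binary usually follows the pattern sketched below. This is not the poster's actual MARS.py; the relative path assumes the binary arrives in the task's working directory via -file, and the key/value output format is illustrative:

        #!/usr/bin/env python
        """Streaming mapper sketch: invoke a C binary (shipped with -file) once per input line."""
        import os
        import stat
        import subprocess
        import sys

        BINARY = "./hello"  # assumption: shipped into the task's working directory via -file

        # Files distributed with -file can lose their execute bit; restore it defensively.
        os.chmod(BINARY, os.stat(BINARY).st_mode | stat.S_IEXEC)

        for line in sys.stdin:
            key = line.strip()
            out = subprocess.check_output([BINARY], stdin=subprocess.DEVNULL, text=True)
            # Emit key<TAB>value so the streaming framework can shuffle and sort it.
            sys.stdout.write(key + "\t" + out.strip() + "\n")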
