Search Results

Search found 59278 results on 2372 pages for 'time estimation'.


  • Optimize php-fpm and Varnish for a powerful server

    - by Jim
    My setup: an Intel® Core™ i7-2600 with 16 GB DDR3 RAM, running varnish + nginx + php-fpm + APC for a not very heavy WordPress blog with W3 Total Cache and a CDN. My problem is that above 55 hits per second (according to blitz.io) Varnish starts giving out timeouts. CPU usage at that point is barely 1%, and free memory stays above 10 GB at all times. I tried benchmarking php-fpm directly, with a result of 150 hits/s without any timeouts; beyond that, though, CPU usage goes to 100% and it stops responding. Can you help me optimize this to handle more? As I understand it, nginx has nothing to do with the problem here, so I am not posting its config.

    php-fpm config:

        listen = /tmp/php5-fpm.sock
        listen.allowed_clients = 127.0.0.1
        user = nginx
        group = nginx
        pm = dynamic
        pm.max_children = 150
        pm.start_servers = 7
        pm.min_spare_servers = 2
        pm.max_spare_servers = 15
        pm.max_requests = 500
        slowlog = /var/log/php-fpm/www-slow.log
        php_admin_value[error_log] = /var/log/php-fpm/www-error.log
        php_admin_flag[log_errors] = on

    APC config:

        extension = apc.so
        apc.enabled=1
        apc.shm_size=512MB
        apc.num_files_hint=0
        apc.user_entries_hint=0
        apc.ttl=7200
        apc.use_request_time=1
        apc.user_ttl=7200
        apc.gc_ttl=3600
        apc.cache_by_default=1
        apc.filters
        apc.mmap_file_mask=/tmp/apc.XXXXXX
        apc.file_update_protection=2
        apc.enable_cli=0
        apc.max_file_size=1M
        apc.stat=1
        apc.stat_ctime=0
        apc.canonicalize=0
        apc.write_lock=1
        apc.report_autofilter=0
        apc.rfc1867=0
        apc.rfc1867_prefix = upload_
        apc.rfc1867_name=APC_UPLOAD_PROGRESS
        apc.rfc1867_freq=0
        apc.rfc1867_ttl=3600
        apc.include_once_override=0
        apc.lazy_classes=0
        apc.lazy_functions=0
        apc.coredump_unmap=0
        apc.file_md5=0
        apc.preload_path

    Varnish VCL:

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
            .connect_timeout = 6s;
            .first_byte_timeout = 6s;
            .between_bytes_timeout = 60s;
        }

        acl purgehosts {
            "localhost";
            "127.0.0.1";
        }

        # Called after a document has been successfully retrieved from the backend.
        sub vcl_fetch {
            # Uncomment to make the default cache "time to live" 5 minutes; handy,
            # but it may cache stale pages unless purged. (TODO)
            # By default Varnish will use the headers sent to it by Apache (the
            # backend server) to figure out the correct TTL. WP Super Cache sends
            # a TTL of 3 seconds, set in wp-content/cache/.htaccess.
            set beresp.ttl = 24h;

            # Strip cookies for static files and set a long cache expiry time.
            if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
                unset beresp.http.set-cookie;
                set beresp.ttl = 24h;
            }

            # If WordPress cookies are found then the page is not cacheable.
            if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # set beresp.cacheable = false;  # versions less than 3
                # beresp.ttl > 0 is cacheable, so 0 will not be cached
                set beresp.ttl = 0s;
            } else {
                # set beresp.cacheable = true;
                set beresp.ttl = 24h;  # cache for 24 hrs
            }

            # If TTL is not > 0 seconds then the object is not cacheable.
            if (!beresp.ttl > 0s) {
                # set beresp.http.X-Cacheable = "NO:Not Cacheable";
            } else if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # You don't wish to cache content for logged-in users.
                set beresp.http.X-Cacheable = "NO:Got Session";
                return(hit_for_pass);  # previously just pass, but changed in v3+
            } else if (beresp.http.Cache-Control ~ "private") {
                # You are respecting the Cache-Control: private header from the backend.
                set beresp.http.X-Cacheable = "NO:Cache-Control=private";
                return(hit_for_pass);
            } else if (beresp.ttl < 1s) {
                # You are extending the lifetime of the object artificially.
                set beresp.ttl = 300s;
                set beresp.grace = 300s;
                set beresp.http.X-Cacheable = "YES:Forced";
            } else {
                # Varnish determined the object was cacheable.
                set beresp.http.X-Cacheable = "YES";
                if (beresp.status == 404 || beresp.status >= 500) {
                    set beresp.ttl = 0s;
                }
                # Deliver the content.
                return(deliver);
            }
        }

        sub vcl_hash {
            # Each cached page has to be identified by a key that unlocks it.
            # Add the browser cookie only if a WordPress cookie is found.
            if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # set req.hash += req.http.Cookie;
                hash_data(req.http.Cookie);
            }
        }

        # vcl_recv is called whenever a request is received.
        sub vcl_recv {
            # Remove ?ver=xxxxx strings from URLs so CSS and JS files are cached.
            # Watch out when upgrading WordPress: you need to restart Varnish or flush the cache.
            set req.url = regsub(req.url, "\?ver=.*$", "");

            # Remove "replytocom" from requests to make caching better.
            set req.url = regsub(req.url, "\?replytocom=.*$", "");

            remove req.http.X-Forwarded-For;
            set req.http.X-Forwarded-For = client.ip;

            # Exclude this site because it breaks if cached.
            if (req.http.host == "sr.ituts.gr") {
                return(pass);
            }

            # Serve objects up to 2 minutes past their expiry if the backend is slow to respond.
            set req.grace = 120s;

            # Strip cookies for static files:
            if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
                unset req.http.Cookie;
                return(lookup);
            }

            # Remove has_js and Google Analytics __* cookies.
            set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
            # Remove a ";" prefix, if present.
            set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
            # Remove empty cookies.
            if (req.http.Cookie ~ "^\s*$") {
                unset req.http.Cookie;
            }

            if (req.request == "PURGE") {
                if (!client.ip ~ purgehosts) {
                    error 405 "Not allowed.";
                }
                # In a previous version ban() was purge().
                ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host);
                error 200 "Purged.";
            }

            # Pass anything other than GET and HEAD directly.
            if (req.request != "GET" && req.request != "HEAD") {
                return(pass);
            }
            /* We only deal with GET and HEAD by default */

            # Remove the comments cookie to make caching better.
            set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", "");

            # Never cache the admin pages, the server-status page, or the feed.
            # (You may want to cache your feed; I don't.)
            if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) {
                return(pipe);
            }

            # Don't cache authenticated sessions.
            if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") {
                return(lookup);
            }

            # Don't cache AJAX requests.
            if (req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") {
                return(pass);
            }

            return(lookup);
        }

    Varnish daemon options:

        DAEMON_OPTS="-a :80 \
                     -T 127.0.0.1:6082 \
                     -f /etc/varnish/ituts.vcl \
                     -u varnish -g varnish \
                     -S /etc/varnish/secret \
                     -p thread_pool_add_delay=2 \
                     -p thread_pools=8 \
                     -p thread_pool_min=100 \
                     -p thread_pool_max=1000 \
                     -p session_linger=50 \
                     -p session_max=150000 \
                     -p sess_workspace=262144 \
                     -s malloc,5G"

    I'm not sure where to start. Should I first optimize php-fpm and then move on to Varnish, or is php-fpm at its max right now, so that I should start looking for the problem in Varnish?


  • Help with infrequent segmentation fault in accessing struct

    - by Sarah
    I'm having trouble debugging a segmentation fault, and I'd appreciate tips on how to go about narrowing in on the problem. The error appears when an iterator tries to access an element of a struct Infection, defined as:

        struct Infection {
        public:
            explicit Infection( double it, double rt ) : infT( it ), recT( rt ) {}
            double infT; // infection start time
            double recT; // scheduled recovery time
        };

    These structs are kept in a special structure, InfectionMap:

        typedef boost::unordered_multimap< int, Infection > InfectionMap;

    Every member of class Host has an InfectionMap carriage. Recovery times and associated host identifiers are kept in a priority queue. When a scheduled recovery event arises in the simulation for a particular strain s in a particular host, the program searches through the carriage of that host to find the Infection whose recT matches the recovery time (double recoverTime). (For reasons that aren't worth going into, it's not as expedient for me to use recT as the key to InfectionMap; the strain s is more useful, and coinfections with the same strain are possible.)

        assert( carriage.size() > 0 );
        pair<InfectionMap::iterator,InfectionMap::iterator> ret = carriage.equal_range( s );
        InfectionMap::iterator it;
        for ( it = ret.first; it != ret.second; it++ ) {
            if ( ((*it).second).recT == recoverTime ) { // produces seg fault
                carriage.erase( it );
            }
        }

    I get "Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address..." on the line marked above. The recoverTime is fine, and the assert(...) in the code is not tripped. As I said, this seg fault appears 'randomly' after thousands of successful recovery events. How would you go about figuring out what's going on? I'd love ideas about what could be wrong and how I can further investigate the problem.
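
    One thing worth examining in the loop above is that it erases through an iterator it then increments; the analogous concern in any hashed container is whether erase leaves the iterator usable. For comparison only, here is a minimal, hypothetical Java sketch (all names illustrative, not from the post) of the remove-while-iterating idiom, where removal is routed through the iterator precisely so the traversal state stays valid:

        import java.util.HashMap;
        import java.util.Iterator;
        import java.util.Map;

        public class SafeRemoval {
            public static void main(String[] args) {
                // Hypothetical stand-in for the post's strain -> recovery-time mapping.
                Map<Integer, Double> carriage = new HashMap<>();
                carriage.put(1, 4.2);
                carriage.put(2, 7.5);

                double recoverTime = 7.5;

                // Removing through the iterator keeps the traversal valid;
                // calling carriage.remove(...) directly inside the loop would
                // invalidate it (ConcurrentModificationException in Java).
                Iterator<Map.Entry<Integer, Double>> it = carriage.entrySet().iterator();
                while (it.hasNext()) {
                    // Note: exact double comparison mirrors the post's code and
                    // is itself fragile if the values were ever recomputed.
                    if (it.next().getValue() == recoverTime) {
                        it.remove();
                    }
                }
                System.out.println(carriage); // prints {1=4.2}
            }
        }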


  • Consuming RSS Feed In PHP

    - by mrduclaw
    I'm trying to use an RSS feed from my blog in the news section of another site. Everything seems to be working fine until I use something like an ellipsis on my blog.

    The expected output is:

        One more time…less fail
        Although this is no joking matter…

    The actual output is:

        One more time?less fail
        Although this is no joking matter…

    The problem is that the ? should be an ellipsis (…). The code I'm using is the same for the first line (the blog title) and the second line (the blog contents), and that code is:

        $a = utf8_decode($a);
        print("$a");

    where $a is the string from the RSS feed. Can anyone point me in the right direction as to why that code would work correctly for the body (second line) and not for the title (first line)? Or suggest a better way to do this? Thanks!


  • Importing content into Drupal

    - by anru
    I have a WordPress site with 5k posts, and each post has on average 25 comments, so 125k total nodes have to be added. I need to import those posts and comments into Drupal 6. I have written a script to import them via Drupal's cron service, but the cron service keeps timing out, because importing 125k nodes one by one is very slow. What can I do to improve Drupal's import speed? I am using Drupal's built-in node_save() and comment_save() methods to do it; I have not yet found a way to use customized SQL queries to increase import speed. I execute my script through Drupal's cron.php, which means that even though I have set 'max_execution_time' to unlimited, that only affects PHP; the Apache server has its own timeout setting.


  • Are there any programs to aid in the mass-editing of Visual SourceSafe checkin comments?

    - by Schnapple
    I know that in Visual SourceSafe you can drill down to the history of an individual file, then drill down to an individual check-in and apply a comment to it that way, but that's tedious and time-consuming: if you have a lot of files that were checked in at the same time and you want the same comment to apply to all of them, this will take forever. I use the tool VSSReporter to generate reports of check-ins and other activity from VSS, but it cannot edit anything, only report. Are there any tools which will let you go back and retroactively apply comments to check-ins in an efficient and easy manner?


  • How to calculate dawn / dusk times

    - by ryyst
    Hi, I'm currently using this code to calculate the sunrise/sunset times. (To be more precise, I'm looking for civil dawn/civil dusk times, which are defined as the times when the sun is between 0° and -6° altitude.) As a next step, I'd like to compute the dawn beginning and dusk ending times. I believe the calculations must be very similar. My idea is that if I want to calculate the dawn beginning (dusk ending) time for a place, I just calculate the sunrise (sunset) time for a place 6° farther east (west). Can somebody confirm this assumption, or am I thinking wrong? Thanks for answers! -- Ry
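
    For reference, here is the standard sunrise-equation form this kind of calculation usually reduces to. It is a general formula, not code from the linked implementation; twilight variants are normally handled by changing the zenith angle z of the event rather than moving the observer:

        % Hour angle \omega_0 at which the event occurs, for latitude \phi
        % and solar declination \delta:
        \cos \omega_0 = \frac{\cos z - \sin\phi \, \sin\delta}{\cos\phi \, \cos\delta}
        % z \approx 90.833^{\circ} : sunrise/sunset (refraction plus solar disc)
        % z = 96^{\circ}           : civil dawn/dusk (sun 6^{\circ} below horizon)

    Shifting the location 6° east or west changes the event time by a fixed 24 minutes of clock time, whereas the change in the hour angle between z = 90.833° and z = 96° depends on latitude and season, so the two approaches only agree approximately, and mainly in low-latitude geometries.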


  • Execute Linux "at" command via PHP

    - by ahmad Rabie
    When I run this code via SSH:

        echo wget http://domain.com/send_me_email.php | at 12:54

    it runs correctly and sends me an email at that time. But if I run PHP like this:

        exec("echo wget http://domain.com/send_me_email.php | at 12:54");
        exec("atq", $arr);
        print_r($arr);

    the result of that code is something like this:

        job 63 at 2011-11-27 12:54

    As you can see, the job is created successfully, but I don't receive any email at that time?! I tested this line in PHP:

        exec("wget http://domain.com/send_me_email.php");

    and it sent me an email, which means that I have permission to run exec and wget via PHP. So what is the problem? I can't understand what my problem is. Please help me. Thanks


  • Alright to truncate database tables when also using Hibernate?

    - by Marcus
    Is it OK to truncate tables while at the same time using Hibernate to insert data? We parse a big XML file with many relationships into Hibernate POJOs and persist them to the DB. We are now planning on purging existing data at certain points in time by truncating the tables. Is this OK? It seems to work fine. We don't use Hibernate's second-level cache. One thing I did notice, which is fine, is that when inserting, we generate primary keys using Hibernate's @GeneratedValue, where Hibernate just uses a key value one greater than the highest value in the table; even though we are truncating the tables, Hibernate remembers the prior value and uses prior value + 1, as opposed to starting over at 1. This is fine, just unexpected. Note that the reason we truncate, as opposed to calling delete() on the Hibernate POJOs, is speed: we have gazillions of rows of data, and truncate is just so much faster.
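
    For illustration, a minimal sketch of issuing the truncate through the same Hibernate session, assuming a Hibernate 3-era API and a hypothetical SessionFactory (this is one way to do it, not necessarily the poster's setup). Flushing and clearing first keeps the first-level cache from holding references to rows the truncate removes:

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;

        public class PurgeExample {
            // Hypothetical helper; sessionFactory wiring is assumed, not shown.
            public static void purge(SessionFactory sessionFactory, String tableName) {
                Session session = sessionFactory.openSession();
                Transaction tx = session.beginTransaction();
                try {
                    // Flush pending inserts, then drop cached entity state so the
                    // session doesn't keep references to rows the truncate removes.
                    session.flush();
                    session.clear();

                    // Native SQL: TRUNCATE is usually much faster than delete(),
                    // but it may not reset identity/sequence state that the
                    // generator has already cached, matching the observation above.
                    session.createSQLQuery("truncate table " + tableName).executeUpdate();
                    tx.commit();
                } catch (RuntimeException e) {
                    tx.rollback();
                    throw e;
                } finally {
                    session.close();
                }
            }
        }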


  • Binary search and trie

    - by user121196
    Given a large list of alphabetically sorted words in a file, I need to write a program that, given a word x, determines if x is in the list. Priorities: 1. speed, 2. memory.

    I already know I can use (where n is the number of words and m is the average length of the words):

    1. a trie: time is O(log(n)), space (best case) is O(log(n*m)), space (worst case) is O(n*m).
    2. loading the complete list into memory, then binary search: time is O(log(n)), space is O(n*m).

    I'm not sure about the complexity of the trie; please correct me if it is wrong. Also, are there other good approaches?
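
    A minimal sketch of the second option in Java, assuming a hypothetical "words.txt" with one word per line, already sorted in an order consistent with String.compareTo; Arrays.binarySearch gives the O(log n) lookup once the list is in memory:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Arrays;
        import java.util.List;

        public class WordLookup {
            public static void main(String[] args) throws IOException {
                // Assumes the file's order matches String.compareTo (case matters).
                List<String> lines = Files.readAllLines(Paths.get("words.txt"));
                String[] words = lines.toArray(new String[0]);

                String x = "example";
                // O(log n) comparisons; each comparison is O(m) in word length.
                boolean found = Arrays.binarySearch(words, x) >= 0;
                System.out.println(found ? x + " is in the list" : x + " is not in the list");
            }
        }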


  • iPhone status bar orientation with OpenGL

    - by sfider
    I have an OpenGL-only view that displays in portrait and landscape mode using a projection matrix (the view's transformation is the identity all the time). I need to show the status bar with the proper orientation. I do this by setting the status bar orientation property on UIApplication and changing the frame of the OpenGL view so the view won't go under the status bar. When I change from landscape to portrait (landscape is the initial state), the view's frame is set to (0, 20, 320, 460) and stays like this. However, the view appears to be translated by (-10, -10). It seems that I did change the size of the view but couldn't move it. The weird things are:

    1. initially the view is full screen; I change it to (0, 0, 300, 480) (landscape with status bar) and it works
    2. then it doesn't work when I try to change it for the second time (portrait with status bar)
    3. the frame property of the view shows that the view is placed correctly

    Any thoughts on what the problem could be?


  • Invoking browser on streaming media URLs

    - by Maven
    I have a dirt-simple little function that launches the BlackBerry browser on a streaming media file in order to launch the built-in media player. Everything works fine, but there is this annoying dialog every time from the browser asking me if I want to save or open the file. My answer is always "open", so is there a way I can make that the default and not bring up the dialog each time? The code I'm using to launch the browser:

        // Get the default session
        BrowserSession browserSession = Browser.getDefaultSession();
        // now launch the URL
        browserSession.displayPage(url);

    This is on BlackBerry OS 5.0. Thanks!


  • Python and GStreamer

    - by Seif Sallam
    Hi, I'm creating a streaming application using GStreamer with a TCP pipeline, and I implemented start, pause, and stop. The problem is that I can't seek: I tried to change the playback position from the server side, then I tried on the client side, and finally I tried to change the value on both at the same time, but in all cases it doesn't work. I even tried to pause the playback and then continue, but nothing happens. I'm having this problem with both seeking and the volume. Any help please? I searched everywhere but I couldn't find anything that worked. This is the code that I use for seeking:

        self.pipeline.seek_simple(gst.FORMAT_TIME, gst.SEEK_FLAG_FLUSH, time)


  • Multiple calls to AlarmManager.setRepeating deliver the same Intent/PendingIntent extra values, but

    - by Chris Boyle
    Solved while writing this question, but posting in case it helps anyone: I'm setting multiple alarms like this, with different values of id:

        AlarmManager alarms = (AlarmManager)context.getSystemService(
                Context.ALARM_SERVICE);
        Intent i = new Intent(MyReceiver.ACTION_ALARM);  // "com.example.ALARM"
        i.putExtra(MyReceiver.EXTRA_ID, id);             // "com.example.ID", 2
        PendingIntent p = PendingIntent.getBroadcast(context, 0, i, 0);
        alarms.setRepeating(AlarmManager.RTC_WAKEUP, nextMillis, 300000, p); // 5 mins

    ...and receiving them like this:

        public void onReceive(Context context, Intent intent) {
            if (intent.getAction().equals(ACTION_ALARM)) {
                // It's time to sound/show an alarm
                final long id = intent.getLongExtra(EXTRA_ID, -1);

    The alarm is delivered to my receiver at the right times, but often with EXTRA_ID set to the wrong value: it's a value that I have used at some point, just not the one that I wanted delivered at that particular time.
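
    The usual explanation for this symptom is that PendingIntent.getBroadcast reuses an existing PendingIntent when the Intents match by filterEquals(), which ignores extras. A hedged sketch of the common workaround, giving each alarm a distinct requestCode (the second argument); this assumes the same context, id, nextMillis and MyReceiver constants as the snippet above, and is not necessarily the poster's exact fix:

        import android.app.AlarmManager;
        import android.app.PendingIntent;
        import android.content.Context;
        import android.content.Intent;

        // A distinct requestCode per alarm keeps the PendingIntents separate;
        // with requestCode 0 everywhere, Intents that differ only in extras
        // all resolve to the same (first) PendingIntent.
        void scheduleDistinctAlarm(Context context, long id, long nextMillis) {
            AlarmManager alarms = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
            Intent i = new Intent(MyReceiver.ACTION_ALARM);
            i.putExtra(MyReceiver.EXTRA_ID, id);
            PendingIntent p = PendingIntent.getBroadcast(
                    context,
                    (int) id,                           // distinct requestCode per alarm
                    i,
                    PendingIntent.FLAG_UPDATE_CURRENT); // refresh extras if it already exists
            alarms.setRepeating(AlarmManager.RTC_WAKEUP, nextMillis, 300000, p);
        }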


  • Run Java thread at specific times

    - by rmarimon
    I have a web application that synchronizes with a central database four times per hour. The process usually takes 2 minutes. I would like to run this process as a thread at X:55, X:10, X:25, and X:40 so that the users know that at X:00, X:15, X:30, and X:45 they have a clean copy of the database. It is just about managing expectations. I have gone through the executors in java.util.concurrent, but the scheduling is done with scheduleAtFixedRate, which I believe provides no guarantee about when this is actually run in terms of the clock. I could use an initial delay to launch the Runnable so that the first run is close to the launch time and schedule it for every 15 minutes, but it seems that this would probably diverge over time. Is there an easier way to schedule the thread to run 5 minutes before every quarter hour?
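
    A minimal sketch of one way to do this with ScheduledExecutorService, assuming a hypothetical sync() task: compute the delay to the next mm = 10/25/40/55 boundary, then repeat at a fixed 15-minute rate (second-level precision is ignored for brevity; long-term drift could also be handled by rescheduling each run):

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class QuarterHourSync {
            public static void main(String[] args) {
                ScheduledExecutorService scheduler =
                        Executors.newSingleThreadScheduledExecutor();

                long periodMinutes = 15;
                long nowMillis = System.currentTimeMillis();
                long minuteOfHour = (nowMillis / 60000) % 60;

                // Delay until the next of :10, :25, :40, :55 (all are 10 mod 15).
                long delayMinutes = ((10 - minuteOfHour) % 15 + 15) % 15;

                scheduler.scheduleAtFixedRate(
                        QuarterHourSync::sync, delayMinutes, periodMinutes, TimeUnit.MINUTES);
            }

            // Hypothetical stand-in for the 2-minute database synchronization.
            private static void sync() {
                System.out.println("syncing at " + new java.util.Date());
            }
        }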


  • Thoughts on using Alpha Five v10 with Codeless AJAX for building an AJAX database app in a short amount of time

    - by william Hunter
    I need to build an AJAX application against our MS SQL Server database for my company. The app has to have user permissions and reporting and is pretty complex. I am really under the gun in terms of time; the company that I work for needs the app for an important project launch. A colleague/friend of mine in a different company recommended that I look at a product from Alpha Software called Alpha Five v10 with Codeless AJAX. He has told me that he has used it extensively and that it saves him a "serious boat load of time", and he says that he has not run into limitations because you can also write your own JavaScript or wire in jQuery. Before I commit to Alpha Five v10, I would like to get other opinions. Thanks. Norman Stern. Chicago


  • SocketTimeoutException while creating socket, Java

    - by Sunil Kumar Sahoo
    Hi all, I have created a sample Java socket application. I used:

        Socket s = new Socket(ip, port);

    It works fine, but when my internet connection is very slow, I get a SocketTimeoutException on that line after a long interval (sometimes even 2 minutes); that is, it gives that error while creating the socket. I want the exception to be handled properly: if the internet connection is very slow, the error currently occurs only after a long delay. I want this type of error to be caught fast, meaning the error should not arrive after such a delayed interval but should come immediately. How do I achieve this? Thanks, Sunil Kumar Sahoo
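
    A minimal sketch of the usual approach, with placeholder host/port values: create the socket unconnected, then pass an explicit timeout to connect() so a slow link fails fast instead of waiting for the OS default:

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.net.Socket;
        import java.net.SocketTimeoutException;

        public class FastFailConnect {
            public static void main(String[] args) {
                String ip = "192.0.2.1";     // placeholder address
                int port = 8080;             // placeholder port
                int connectTimeoutMs = 5000; // fail after 5 seconds, not minutes

                Socket s = new Socket();     // unconnected socket
                try {
                    s.connect(new InetSocketAddress(ip, port), connectTimeoutMs);
                    System.out.println("connected");
                } catch (SocketTimeoutException e) {
                    System.err.println("connect timed out after " + connectTimeoutMs + " ms");
                } catch (IOException e) {
                    System.err.println("connect failed: " + e.getMessage());
                } finally {
                    try { s.close(); } catch (IOException ignored) {}
                }
            }
        }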


  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, which is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background: this is part of a complex numerical processing system that processes data in near real time, looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large memory pool fragmentation when we de-serialize the object; I expect there are also other problems with large memory pool fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the spread of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:

        How to Stream data from/to SQL Server BLOB fields?
        Is there a SqlFileStream like class that works with Sql Server 2005?


  • HTTP vs FTP upload

    - by Richard Knop
    I am building a large website where members will be allowed to upload content (images, videos) up to 20MB in size (maybe a little less, like 15MB; we haven't settled on a final upload limit yet, but it will be somewhere between 10-25MB). My question is, should I go with HTTP or FTP upload in this case? Bear in mind that 80-90% of uploads will be of smaller size, circa 1-3MB, but from time to time some members will also want to upload large files (10MB+). Is HTTP uploading reliable enough for such large files, or should I go with FTP? Is there a noticeable speed difference between HTTP and FTP while uploading files? I am asking because I'm using Zend Framework, which already has an HTTP adapter for file uploads; in case I choose FTP I would have to write my own adapter for it. Thanks!


  • Creating a database desktop application with data manipulation in NetBeans using Java Persistence

    - by Lulu
    It's my first time using persistence in developing a Java program, because I usually connect via JDBC. I read that for large amounts of data, it is best to use persistence. I tried playing with the CRUD example in NetBeans. It's not very helpful though, because it only connects to the DB and allows addition and deletion of records. I need something that will allow me to manipulate the data, like: if the value from column C1 of table T1 is such, it will retrieve data from table T2. In short, I need to apply conditions before knowing what to retrieve exactly. The CRUD example already has a specific table to retrieve and only acts like a database manager. How is it possible to retrieve a specific item first, and then from this determine the next steps to be done? I'm also using embedded JavaDB/Derby as my database (also my first time using it, because I usually use remote MySQL).
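
    A minimal, hypothetical sketch of this kind of conditional retrieval with JPA, assuming a JPA 2 setup with a persistence unit "myPU" and entities T1/T2 (all names here are placeholders, not from the post); the first query's result decides what to fetch next:

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class ConditionalFetch {
            public static void main(String[] args) {
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPU");
                EntityManager em = emf.createEntityManager();
                try {
                    // Step 1: fetch the deciding value from T1.
                    String c1 = em.createQuery(
                            "select t.c1 from T1 t where t.id = :id", String.class)
                            .setParameter("id", 1L)
                            .getSingleResult();

                    // Step 2: the result determines what to retrieve from T2.
                    if ("such".equals(c1)) {
                        List<String> names = em.createQuery(
                                "select t.name from T2 t where t.category = :cat", String.class)
                                .setParameter("cat", c1)
                                .getResultList();
                        names.forEach(System.out::println);
                    }
                } finally {
                    em.close();
                    emf.close();
                }
            }
        }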


  • .NET JIT Code Cache leaking?

    - by pitchfork
    We have a server component written in .NET 3.5. It runs as a service on Windows Server 2008 Standard Edition. It works great, but after some time (days) we notice massive slowdowns and an increased working set. We expected some kind of memory leak and used WinDBG/SOS to analyze dumps of the process. Unfortunately the GC heap doesn't show any leak, but we noticed that the JIT code heap has grown from 8MB after the start to more than 1GB after a few days. We don't use any dynamic code generation techniques of our own. We use Linq2SQL, which is known for dynamic code generation, but we don't know if it can cause such a problem. The main question is whether there is any technique to analyze the dump and check where all these Host Code Heap blocks that are shown in the WinDBG dumps come from.

    [Update] In the meantime we did some more analysis and had Linq2SQL as a probable suspect, especially since we do not use precompiled queries. The following example program creates exactly the same behaviour, where more and more Host Code Heap blocks are created over time:

        using System;
        using System.Linq;
        using System.Threading;

        namespace LinqStressTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    for (int i = 0; i < 100; ++i)
                        ThreadPool.QueueUserWorkItem(Worker);

                    while (runs < 1000000)
                    {
                        Thread.Sleep(5000);
                    }
                }

                static void Worker(object state)
                {
                    for (int i = 0; i < 50; ++i)
                    {
                        using (var ctx = new DataClasses1DataContext())
                        {
                            long id = rnd.Next();
                            var x = ctx.AccountNucleusInfos.Where(an => an.Account.SimPlayers.First().Id == id).SingleOrDefault();
                        }
                    }
                    var localruns = Interlocked.Add(ref runs, 1);
                    System.Console.WriteLine("Action: " + localruns);
                    ThreadPool.QueueUserWorkItem(Worker);
                }

                static Random rnd = new Random();
                static long runs = 0;
            }
        }

    When we replace the Linq query with a precompiled one, the problem seems to disappear.


  • Setting the Identity/Principal from a MessageInspector in WCF

    - by Robert Wagner
    I am developing a WCF service that receives the user's credentials in the SOAP header. These credentials are read on the server side using a MessageInspector. So far so good. I want to set Thread.CurrentPrincipal to a custom principal (CustomPrincipal), but when I do this from the MessageInspector, it gets overridden by the time the service is invoked. When is the best time to set the principal? Also, what is the best way to pass the principal, identity or credentials from the inspector to that location?


  • C# ValueMember property repopulates the control

    - by karthik
    Hi, I just wanted to confirm a couple of things.

    I) Code snippet:

        cmb1.DataSource = dt;
        cmb1.ValueMember = "value";

    Here, data population happens 2 times for the control, 1 extra time, because ValueMember gets changed after the DataSource is assigned. Is that true?

    II) How can I trace this re-population in C#? I just want to debug, observe and confirm. An example please?

    Thanks, Karthik


  • What do you do when every possible business idea is already taken?

    - by jonathanconway
    I have a bit of free time and lots of enthusiasm for software and the web. I want to make a start-up to sell some kind of product or online service, but I'm having a hard time coming up with business ideas that haven't already been implemented. For example, I thought of making an e-ordering website for ordering food from restaurants online. Good thing I typed it into Google, because the market is already full of hundreds of websites doing the same thing and competing heavily. The same thing has happened with so many other business ideas I've become excited and passionate about: they're all taken. What's your response to this? Do you agree that all the good ideas seem to be taken? Or do you think there is room for new businesses, and that I'm just not thinking (or looking) hard enough? Have you ever tried idea after idea, only to find that it was already being done, and you had to move on to something else?


  • Java: updating a text area

    - by n0ob
    For my application I have to build a little customized time ticker, which ticks over after whatever delay I tell it to and writes the new value into my text area. The problem is that the ticker runs fully until the termination time and only then prints all the values. How can I make the text area change while the code is running?

        while (tick < terminationTime) {
            if ((System.currentTimeMillis()) > (msNow + delay)) {
                msNow = System.currentTimeMillis();
                tick = tick + 1;
                currentTime.setText("" + tick);
                sourceTextArea.append("" + tick + " " + System.currentTimeMillis() + " \n");
            }
        }

    currentTime and sourceTextArea are both text areas, and both are getting updated only after the while loop ends.
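
    A minimal sketch of the usual fix, assuming this is Swing: a busy-wait loop like the one above blocks the event dispatch thread, so the UI cannot repaint until the loop ends; a javax.swing.Timer fires its listener on the EDT at the chosen delay and lets repaints happen between ticks (all names here are illustrative):

        import javax.swing.JFrame;
        import javax.swing.JScrollPane;
        import javax.swing.JTextArea;
        import javax.swing.SwingUtilities;
        import javax.swing.Timer;

        public class TickerDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JTextArea sourceTextArea = new JTextArea(10, 30);
                    JFrame frame = new JFrame("Ticker");
                    frame.add(new JScrollPane(sourceTextArea));
                    frame.pack();
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.setVisible(true);

                    int delay = 1000;         // ms between ticks (illustrative)
                    int terminationTime = 10; // stop after this many ticks
                    final int[] tick = {0};

                    // Fires on the EDT, so the text area repaints after every
                    // tick instead of only after a blocking loop finishes.
                    Timer timer = new Timer(delay, null);
                    timer.addActionListener(e -> {
                        tick[0]++;
                        sourceTextArea.append(tick[0] + " " + System.currentTimeMillis() + "\n");
                        if (tick[0] >= terminationTime) {
                            timer.stop();
                        }
                    });
                    timer.start();
                });
            }
        }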

