Search Results

Search found 6612 results on 265 pages for 'seconds'.

Page 231 of 265

  • Faster or more memory-efficient solution in Python for this Codejam problem.

    - by jeroen.vangoey
    I tried my hand at this Google Codejam Africa problem (the contest is already finished, I just did it to improve my programming skills). The Problem: You are hosting a party with G guests and notice that there is an odd number of guests! When planning the party you deliberately invited only couples and gave each couple a unique number C on their invitation. You would like to single out whoever came alone by asking all of the guests for their invitation numbers. The Input: The first line of input gives the number of cases, N. N test cases follow. For each test case there will be: One line containing the value G, the number of guests. One line containing a space-separated list of G integers. Each integer C indicates the invitation code of a guest. Output: For each test case, output one line containing "Case #x: " followed by the number C of the guest who is alone. The Limits: 1 ≤ N ≤ 50; 0 < C ≤ 2147483647. Small dataset: 3 ≤ G < 100. Large dataset: 3 ≤ G < 1000. Sample Input: 3 3 1 2147483647 2147483647 5 3 4 7 4 3 5 2 10 2 10 5 Sample Output: Case #1: 1 Case #2: 7 Case #3: 5 This is the solution that I came up with: with open('A-large-practice.in') as f: lines = f.readlines() with open('A-large-practice.out', 'w') as output: N = int(lines[0]) for testcase, i in enumerate(range(1,2*N,2)): G = int(lines[i]) for guest in range(G): codes = map(int, lines[i+1].split(' ')) alone = (c for c in codes if codes.count(c)==1) output.write("Case #%d: %d\n" % (testcase+1, alone.next())) It runs in 12 seconds on my machine with the large input. Now, my question is, can this solution be improved in Python to run in a shorter time or use less memory? The analysis of the problem gives some pointers on how to do this in Java and C++ but I can't translate those solutions back to Python.
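    A sketch of one common way to speed this up (not from the original post): tally each case's codes once with collections.Counter instead of calling codes.count(c) for every guest, which rescans the whole list and makes the per-case work quadratic. Same assumed file names as the snippet above.

        from collections import Counter

        with open('A-large-practice.in') as f:
            lines = f.read().splitlines()

        with open('A-large-practice.out', 'w') as output:
            n = int(lines[0])
            for case in range(1, n + 1):
                # Line 2*case - 1 holds G, line 2*case holds the G codes.
                codes = [int(x) for x in lines[2 * case].split()]
                counts = Counter(codes)                      # one O(G) pass
                alone = next(c for c in codes if counts[c] == 1)
                # A lower-memory alternative: XOR all codes together; paired
                # codes cancel out, leaving the lone guest's code.
                output.write("Case #%d: %d\n" % (case, alone))

    With roughly a thousand codes per case this does on the order of G dictionary lookups per case instead of G*G list scans, which is where most of the 12 seconds were going.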


  • Making firefox refresh images faster

    - by Earlz
    I have a thing I'm doing where I need a webpage to stream a series of images from the local client computer. I have a very simple example running here: http://jsbin.com/idowi/34 The code is extremely simple: setTimeout ( "refreshImage()", 100 ); function refreshImage(){ var date = new Date() var ticks = date.getTime() $('#image').attr('src','http://127.0.0.1:2723/signature?'+ticks.toString()); setTimeout ("refreshImage()", 100 ); } Basically I have a signature pad being used on the client machine. We want the signature to show up in the web page and for the user to see themselves signing it within the web page (the pad does not have an LCD to show it to them right there). So I set up a simple local HTTP server which grabs an image of the current state of the signature pad and sends it to the browser. This has no problems in any browser (tested in IE7, 8, and Chrome) except Firefox, where it is extremely laggy and jumpy and doesn't keep up with the 10 FPS rate. Does anyone have any ideas on how to fix this? I've tried creating very simple double buffering in JavaScript but that just made things worse. Also, for a bit more information, it seems that Firefox is executing the JavaScript at the correct framerate, as the requests are coming in to the server at a constant speed. But the images are only refreshed inconsistently, ranging from 5 times per second all the way down to 0 times per second (taking 2 seconds to do a refresh). Also I have tried using different image formats, all with the same results. The formats I've tried include bitmaps, PNGs, and GIFs (GIFs caused a minor flicker problem in Chrome though). Could it be possible that Firefox is somehow caching my images, causing a slight lag? I send these headers though: Pragma-directive: no-cache Cache-directive: no-cache Cache-control: no-cache Pragma: no-cache Expires: 0


  • What influences running time of reading a bunch of images?

    - by remi
    I have a program where I read a handful of tiny images (50000 images of size 32x32). I read them using the OpenCV imread function, in a program like this: std::vector<std::string> imageList; // is initialized with full path to the 50K images for(string s : imageList) { cv::Mat m = cv::imread(s); } Sometimes it will read the images in a few seconds. Sometimes it takes a few minutes to do so. I run this program in GDB, with a breakpoint set past the image-reading loop, so it's not because I'm stuck at a breakpoint. The same "erratic" behaviour happens when I run the program outside GDB. The same "erratic" behaviour happens with the program compiled with or without optimisation. The same "erratic" behaviour happens whether or not I have other programs running in the background. The images are always at the same place on the hard drive of my machine. I run the program on a SUSE Linux distribution, compiled with gcc. So I am wondering: what could affect the time taken to read the images that much?
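    One common explanation for this kind of run-to-run variance is the operating system's page cache rather than OpenCV itself: a cold run pulls every file from disk, a warm run serves them from RAM. A rough way to check that hypothesis, sketched here in Python for brevity (the directory path and extension are placeholders):

        import time
        from pathlib import Path

        image_dir = Path("/path/to/images")   # placeholder for the 50K-image directory

        def read_all():
            # Raw reads only - no decoding - so the timing isolates file I/O.
            return sum(len(p.read_bytes()) for p in image_dir.glob("*.png"))

        for label in ("first pass (cache may be cold)", "second pass (cache warm)"):
            start = time.perf_counter()
            read_all()
            print(label, round(time.perf_counter() - start, 2), "seconds")

    If the second pass is dramatically faster than the first, the minutes-versus-seconds behaviour is the cache being cold or evicted between runs (other workloads, memory pressure, reboots), not anything in the reading code.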


  • Capture data from jQuery Mobile GUI / form elements like sliders, or the flip switch/toggle, and store in a var?

    - by Carl Foggin
    I have this code, which executes without error; it shows "** slider change **" in the debug console every time the slider changes. But I cannot figure out how to capture the value of the slider so I can store it in a var. Can you help? I'm hoping it's a simple thing. $( '#page4_Options' ).live( 'pageinit', function(event){ var slideTime = userOptions.getSlideTime() / 1000; // userOptions is my Object to get/set params from localStorage. $("input[id='slider']").val(slideTime).slider("refresh"); // set default slide time when page init's. console.log("userOptions.getSlideTime()", userOptions.getSlideTime() ); $( "#slider" ).bind( "slidestop", function(event, ui) { console.log("** slider change **"); // How do I capture the new slider value into a var? }); }); Here's the HTML with the slider; it's inside a fieldcontain div: <div data-role="fieldcontain"> <label for="slider">Slide Duration (seconds):</label> <input type="range" name="slider" id="slider" value="2" min="0" max="60" /> </div>


  • Is there any seriously good reason why a view can not completely manage itself?

    - by mystify
    Example: I have a warning triangle icon, which is a UIImageView subclass. That warning is blended in with an animation, pulses for 3 seconds and then fades out. It always has a parent view! It's always only used this way: alloc, init, add as subview, kick off animation, when done: remove from superview. So I want this: [WarningIcon warningAtPosition:CGPointMake(50.0f, 100.0f) parentView:self]; BANG! That's it. Call and forget. The view adds itself as a subview to the parent, then does its animations. And when done, it cuts itself off from the branch with [self removeFromSuperview];. Now some nerd a year ago told me: "Never cut yourself off from your own branch". In other words: a view should never ever remove itself from its superview if it's not referenced anywhere else. I want to get it, really. WHY? Think about this: the hard way, I would do exactly the same thing. Create an instance and hang me in as delegate, kick off the animation, and when the warning icon is done animating, it calls me back: "hey man I'm done, get rid of me!" - so my delegate method is called with a pointer to that instance, and I do exactly the same thing: [thatWarningIcon removeFromSuperview]; - BANG. Now I'd really love to know why this sucks. Life would be so easy.


  • How can I make PHP scripts time out gracefully while waiting for long-running MySQL queries?

    - by Mark B
    I have a PHP site which runs quite a lot of database queries. With certain combinations of parameters, these queries can end up running for a long time, triggering an ugly timeout message. I want to replace this with a nice timeout message themed according to the rest of my site style. Anticipating the usual answers to this kind of question: "Optimise your queries so they don't run for so long" - I am logging long-running queries and optimising them, but I only know about these after a user has been affected. "Increase your PHP timeout setting (e.g. set_time_limit, max_execution_time) so that the long-running query can finish" - Sometimes the query can run for several minutes. I want to tell the user there's a problem before that (e.g. after 30 seconds). "Use register_tick_function to monitor how long scripts have been running" - This only gets executed between lines of code in my script. While the script is waiting for a response from the database, the tick function doesn't get called. In case it helps, the site is built using Drupal (with lots of customisation), and is running on a virtual dedicated Linux server on PHP 5.2 with MySQL 5.


  • C++ map performance - Linux (30 sec) vs Windows (30 mins) !!!

    - by sonofdelphi
    I need to process a list of files. The processing action should not be repeated for the same file. The code I am using for this is - using namespace std; vector<File*> gInputFileList; //Can contain duplicates, File has member sFilename map<string, File*> gProcessedFileList; //Using map to avoid linear search costs void processFile(File* pFile) { File* pProcessedFile = gProcessedFileList[pFile->sFilename]; if(pProcessedFile != NULL) return; //Already processed foo(pFile); //foo() is the action to do for each file gProcessedFileList[pFile->sFilename] = pFile; } void main() { size_t n= gInputFileList.size(); //Using array syntax (iterator syntax also gives identical performance) for(size_t i=0; i<n; i++){ processFile(gInputFileList[i]); } } The code works correctly, but... My problem is that when the input size is 1000, it takes 30 minutes - HALF AN HOUR - on Windows/Visual Studio 2008 Express (both Debug and Release builds). For the same input, it takes only 40 seconds to run on Linux/gcc! What could be the problem? The action foo() takes only a very short time to execute, when used separately. Should I be using something like vector::reserve for the map?


  • MS 70-536 .NET Framework Foundation - info on Exam? Especially regex?

    - by Sebastian P.R. Gingter
    Hi, I know there are already some questions about this, but not that specific. I have the self-paced training kit and worked through the test exam tool that is on the CD coming with the book. I constantly fail on the test tool, mostly on the regex questions. I'm not a regex guru. In fact my regex-fu is more than weak. I know what regex'es are, how I can use them and where my 'Regular expressions - kurz und gut' book is in my drawer in case I really need them. And to be honest I feel like learning regex is a total waste of time, because if I need them I have either a colleague that is fit and can do them in just a few seconds or I need my book and get them right in a still fair amount of time. And from my experience I can tell that I need regex like once in two or three years. So just putting in a lot of time into learning just the expressions to pass the exam is.. something I like not to have to do. Can you tell me something about the real exam vs. the test exam tool on the book and about the need to know regex for passing it? Thank you for your time. Marked as Community Wiki. Hope that fits?


  • long startup time...Need help

    - by Jeff
    My app is all done and working great. So now I ran it on an old iPhone and the app takes 17.3 seconds to start!?!? I spent a lot of time looking into it and I found that the reason it is taking so long to load is that I have a lot of views and each view has a PNG background image. All my views are made in IB, and in my code: #import "MyTestAppDelegate.h" #import "MyTestViewController.h" @implementation MyTestAppDelegate @synthesize window; @synthesize viewController; - (void)applicationDidFinishLaunching:(UIApplication *)application { // Override point for customization after app launch [window addSubview:viewController.view]; [window makeKeyAndVisible]; } - (void)dealloc { [viewController release]; [window release]; [super dealloc]; } @end At the end of the code, where it says: [window addSubview:viewController.view]; the app seems to be loading all the views in the nib at the same time. All the PNGs from all the views add up to about 12 MB. There is no need for the app to load all the views at the same time during startup. Is there a way I can only load the first "home" view at startup? (All the views are part of the same nib.)


  • Python : How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application; it works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then, due to lack of CPU, slows down. Problem: Profiling the application gives me the time taken by functions. This time increases under high load. Time consumed may be due to complex calculation or to waiting for CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, OR I might have some stupid code taking the CPU and causing the slowdown? Any help is appreciated! Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec [100 users]. I get an average time of 36 seconds on 100 requests vs. 1.25 sec on 1 request. More info on the configuration: Nginx + uWSGI with 4 workers. No database used; responses come from a REST API. On the 1st hit the response of the REST API gets cached, therefore it doesn't make a difference. Using ujson for JSON parsing. Curious to know: Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.
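    A minimal sketch of one way to separate "burning CPU" from "waiting": wrap a suspect block and compare CPU time against wall-clock time (time.process_time needs Python 3.3+; on older interpreters resource.getrusage gives the same split). If wall time grows under load while CPU time stays flat, the code is waiting on I/O, locks or the GIL; if both grow together, it is genuinely consuming CPU.

        import time

        def measure(func, *args, **kwargs):
            wall_start = time.perf_counter()
            cpu_start = time.process_time()    # CPU seconds used by this process
            result = func(*args, **kwargs)
            wall = time.perf_counter() - wall_start
            cpu = time.process_time() - cpu_start
            print("wall: %.3fs  cpu: %.3fs  waiting: %.3fs" % (wall, cpu, wall - cpu))
            return result

    cProfile answers "where is the time spent", while this split answers "is that time computation or waiting", which is the distinction the question is really after.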


  • MySQL Join/Comparison on a DATETIME column (<5.6.4 and > 5.6.4)

    - by Simon
    Suppose I have two tables like so: Events ID (PK int autoInc), Time (datetime), Caption (varchar) Position ID (PK int autoinc), Time (datetime), Easting (float), Northing (float) Is it safe to, for example, list all the events and their position if I am using the Time field as my joining criterion? I.e.: SELECT E.*,P.* FROM Events E JOIN Position P ON E.Time = P.Time OR, even just simply comparing a datetime value (taking into consideration that the parameterized value may contain the fractional seconds part - which MySQL has always accepted), e.g. SELECT E.* FROM Events E WHERE E.Time = @Time I understand MySQL (before version 5.6.4) only stores datetime fields WITHOUT milliseconds, so I would assume this query would function OK. However, as of version 5.6.4, I have read that MySQL can now store milliseconds with the datetime field. Assuming datetime values are inserted using functions such as NOW(), the milliseconds are truncated (<5.6.4), which I would assume allows the above query to work. However, with version 5.6.4 and later, this could potentially NOT work. I am, and only ever will be, interested in second accuracy. If anyone could answer the following questions it would be greatly appreciated: In general, how does MySQL compare datetime fields against one another (consider the above query)? Is the above query fine, and does it make use of indexes on the time fields? (MySQL < 5.6.4) Is there any way to exclude milliseconds? I.e. when inserting and in conditional joins/selects etc? (MySQL 5.6.4) Will the join query above work? (MySQL 5.6.4) EDIT: I know I can cast the datetimes, thanks to those that answered, but I'm trying to tackle the root of the problem here (the fact that the storage type/definition has been changed) and I DO NOT want to use functions in my queries. That negates all my work of optimizing queries, applying indexes etc., not to mention having to rewrite all my queries. EDIT 2: Can anyone out there suggest a reason NOT to join on a DATETIME field using second accuracy?


  • cURL/PHP Request Executes 50% of the Time

    - by makavelli
    After searching all over, I can't understand why cURL requests issued to a remote SSL-enabled host are successful only 50% or so of the time in my case. Here's the situation: I have a sequence of cURL requests, all of them issued to a HTTPS remote host, within a single PHP script that I run using the PHP CLI. Occasionally when I run the script the requests execute successfully, but for some reason most of the times I run it I get the following error from cURL: * About to connect() to www.virginia.edu port 443 (#0) * Trying 128.143.22.36... * connected * Connected to www.virginia.edu (128.143.22.36) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad record mac * Closing connection #0 If I try again a few times I get the same result, but then after a few tries the requests will go through successfully. Running the script after that again results in an error, and the pattern continues. Researching the error 'alert bad record mac' didn't give me anything helpful, and I hesitate to blame it on an SSL issue since the script still runs occasionally. I'm on Ubuntu Server 10.04, with php5 and php5-curl installed, as well as the latest version of openssl. In terms of cURL specific options, CURLOPT_SSL_VERIFYPEER is set to false, and both CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT are set to 4 seconds. Further illustrating this problem is the fact that the same exact situation occurs on my Mac OS X dev machine - the requests only go through ~50% of the time.


  • Perl: Value of response code in HTTP::Request

    - by lola
    Hi all, so I am writing code to get a document from the internet. The document size is around 200 KB. This is the code: #!/usr/local/bin/perl -w use strict; use LWP::UserAgent; my $ua = LWP::UserAgent->new; my $url = "SOME URL"; my $req = HTTP::Request->new(GET => $url); my $res = $ua->request($req); if($res->is_success){ print $res->content ."\n"; } else{ print "Error: " . $res->status_line; } Now, the only problem is I can't mention what the URL is. However, the output is: "Error: 500 read timeout". When I checked the link externally, the data was downloaded in under 5 seconds. I even changed the timeout to 1000s, but it still didn't work. How should I go about finding more information related to the response? The size of the file (around 200 KB) is also not large enough to warrant a read timeout. The server is also not a busy one; it didn't give a problem whenever I checked the link in the browser. Thanks.


  • how to call a js function from loaded jquery

    - by Y.G.J
    The function is in the page that loads the AJAX, but I'm trying to call the function. Code: [ajax] $.ajax({ type: "POST", url: "loginpersonal.asp", data: "id=<%=request("id")%>", beforeSend: function() { $("#personaltab").hide(); }, success: function(msg){ $("#personaltab").empty().append(msg); }, complete: function() { $("#personaltab").slideDown(); }, error: function() { $("#personaltab").append("error").slideDown(); } }); [the js function] function GetCount(t){ if(t>0) { total = t } else { total -=1; } amount=total; if(amount < 0){ startpersonalbid(); } else{ days=0;hours=0;mins=0;secs=0;out=""; days=Math.floor(amount/86400);//days amount=amount%86400; hours=Math.floor(amount/3600);//hours amount=amount%3600; mins=Math.floor(amount/60);//minutes amount=amount%60; secs=Math.floor(amount);//seconds if(days != 0){out += days +":";} if(days != 0 || hours != 0){out += hours +":";} if(days != 0 || hours != 0 || mins != 0){out += ((mins>=10)?mins:"0"+mins) +":";} out += ((secs>=10)?secs:"0"+secs) ; document.getElementById('countbox').innerHTML=out; setTimeout("GetCount()", 1000); } } window.onload=function(){ GetCount(<%= DateDiff("s", Now,privatesellstartdate&" "&privatesellstarttime ) %>); So at the end of loginpersonal.asp, loaded via the AJAX call... if it does what it's supposed to do... I'm trying to call the function GetCount() again.


  • Need help optimizing this Django aggregate query

    - by Chris Lawlor
    I have the following model class Plugin(models.Model): name = models.CharField(max_length=50) # more fields which represents a plugin that can be downloaded from my site. To track downloads, I have class Download(models.Model): plugin = models.ForeignKey(Plugin) timestamp = models.DateTimeField(auto_now=True) So to build a view showing plugins sorted by downloads, I have the following query: # pbd is plugins by download - commented here to prevent scrolling pbd = Plugin.objects.annotate(dl_total=Count('download')).order_by('-dl_total') Which works, but is very slow. With only 1,000 plugins, the avg. response is 3.6 - 3.9 seconds (devserver with local PostgreSQL db), whereas a similar view with a much simpler query (sorting by plugin release date) takes 160 ms or so. I'm looking for suggestions on how to optimize this query. I'd really prefer that the query return Plugin objects (as opposed to using values) since I'm sharing the same template with the other views (Plugins by rating, Plugins by release date, etc.), so the template is expecting Plugin objects - plus I'm not sure how I would get things like the absolute_url without a reference to the plugin object. Or is my whole approach doomed to failure? Is there a better way to track downloads? I ultimately want to provide users some nice download statistics for the plugins they've uploaded - like downloads per day/week/month. Will I have to calculate and cache Downloads at some point? EDIT: In my test dataset, there are somewhere between 10-20 Download instances per Plugin - in production I expect this number would be much higher for many of the plugins.
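    One direction the question already hints at ("will I have to calculate and cache Downloads?") is keeping a denormalised counter on Plugin and bumping it whenever a Download row is written, so the listing becomes a plain indexed sort that still returns Plugin objects. A sketch under those assumptions - the download_count field and record_download helper are made up here, not part of the original models:

        from django.db import models
        from django.db.models import F

        class Plugin(models.Model):
            name = models.CharField(max_length=50)
            # Hypothetical denormalised counter, kept in step with Download rows.
            download_count = models.PositiveIntegerField(default=0, db_index=True)

        class Download(models.Model):
            # on_delete is required on newer Django versions.
            plugin = models.ForeignKey(Plugin, on_delete=models.CASCADE)
            timestamp = models.DateTimeField(auto_now=True)

        def record_download(plugin):
            Download.objects.create(plugin=plugin)
            # F() makes the increment a single atomic UPDATE on the database side.
            Plugin.objects.filter(pk=plugin.pk).update(
                download_count=F('download_count') + 1)

        # The listing view keeps returning Plugin objects for the shared template:
        pbd = Plugin.objects.order_by('-download_count')

    The raw Download rows stay available for per-day/week/month statistics later; the counter only exists to make the common "sort by downloads" page cheap.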


  • Peculiar JRE behaviour running RMI server under load, should I worry?

    - by darri
    I've been developing a minimalistic Java rich client CRUD application framework for the past few years, mostly as a hobby but also actively using it to write applications for my current employer. The framework provides database access to clients either via a local JDBC based connection or a lightweight RMI server. Last night I started a load testing application, which ran 100 headless clients, bombarding the server with requests, each client waiting only 1 - 2 seconds between running simple use cases, consisting of selecting records along with associated detail records from a simple e-store database (Chinook). This morning when I looked at the telemetry results from the server profiling session I noticed something which to me seemed strange (and made me keep the setup running for the remainder of the day); I don't really know what conclusions to draw from it. Here are the results (profiler screenshots of memory, GC activity, threads, and CPU load, omitted here). Interesting, right? So the question is, is this normal or erratic? Is this simply the JRE (1.6.0_03 on Windows XP) doing its thing (perhaps related to the JRE configuration) or is my framework design somehow causing this? Running the server against MySQL as opposed to an embedded H2 database does not affect the pattern. I am leaving out the details of my server design, but I'll be happy to elaborate if this behaviour is deemed erratic.


  • Android - update widget text

    - by david
    Hi, I have 2 questions about widget updates. I have 2 buttons and I need to change one button's text when I press the other one; how can I do this? The first time I open the widget it calls the onUpdate method, but it never calls it again. I need to update the widget every 2 seconds and I have this line in the XML: android:updatePeriodMillis="2000" Do I need a service, or should it work just with the updatePeriodMillis tag? onUpdate method: RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.newswidget); Intent intent = new Intent(context, DetalleConsulta.class); intent.putExtra(DetalleConsulta.CONSULTA_ID_NAME, "3"); PendingIntent pendingIntent = PendingIntent.getActivity(context, 0, intent, 0); views.setOnClickPendingIntent(R.id.btNews, pendingIntent); /* Initialize variables to call the controller */ this.imei = ((TelephonyManager)context.getSystemService(Context.TELEPHONY_SERVICE)).getDeviceId(); this.controlador = new Controlador(this.imei); try { this.respuestas = this.controlador.recuperarNuevasRespuestas(); if(this.respuestas != null && this.respuestas.size() > 0){ Iterator<Consulta> iterRespuestas = this.respuestas.iterator(); views.setTextViewText(R.id.btNews, ((Consulta)iterRespuestas.next()).getRespuesta()); } } catch (PersistenciaException e) { //TODO handle error } appWidgetManager.updateAppWidget(appWidgetIds, views); thx a lot!!!


  • Can I get away with this or is it just too crude and impractical?

    - by The_AlienCoder
    I spent the whole of last night searching for a free ASP.NET web chat control that I could simply drag into my website. Well, the search was in vain as I could not find a control that matched my needs, i.e. a list of users, 1-to-1 chat, the ability to kick out users... In the end I decided to create my own control from scratch. Although it works well on my machine, I'm concerned that it may be a little crude and impractical in a shared hosting environment. Basically this is what I did: Created an SQL database that stores the chat messages. Wrote the stored procedures and included a statement that clears old messages. Then the 'crude' part: Dragged an update panel and timer control onto my page. Dragged a Repeater databound to the chat messages table inside the update panel. Dragged another update panel and inside it put a textbox and a button. Configured the timer control to tick every 5 seconds. ...and then I made it all work like this: In the timer tick event I 'refreshed' the messages display by invoking DataBind() on my repeater, i.e. protected void Timer1_Tick(object sender, EventArgs e) { MyRepeater.DataBind(); } Then in my send button click event: protected void btnSend_Click(object sender, EventArgs e) { MyDataLayer.InsertMessage(Message, Sender, CurrTime); } Well, it works well on my machine and I've got the other functionalities (user list, kick out user...) to work by simply creating more tables. But like I said, it seems a little crude to me, so I need a professional opinion. Should I run with this or try another approach?


  • MySql product\tag query optimisation - please help!

    - by Nige
    Hi There I have an sql query i am struggling to optimise. It basically is used to pull back products for a shopping cart. The products each have tags attached using a many to many table product_tag and also i pull back a store name from a separate store table. Im using group_concat to get a list of tags for the display (this is why i have the strange groupby orderby clauses at the bottom) and i need to order by dateadded, showing the latest scheduled product first. Here is the query.... SELECT products.*, stores.name, GROUP_CONCAT(tags.taglabel ORDER BY tags.id ASC SEPARATOR " ") taglist FROM (products) JOIN product_tag ON products.id=product_tag.productid JOIN tags ON tags.id=product_tag.tagid JOIN stores ON products.cid=stores.siteid WHERE dateadded < '2010-05-28 07:55:41' GROUP BY products.id ASC ORDER BY products.dateadded DESC LIMIT 2 Unfortunately even with a small set of data (3 tags and about 12 products) the query is taking 00.0034 seconds to run. Eventually i want to have about 2000 products and 50 tagsin this system (im guessing this will be very slooooow). Here is the ExplainSql... id|select_type|table|type|possible_keys|key|key_len|ref|rows|Extra 1|SIMPLE|tags|ALL|PRIMARY|NULL|NULL|NULL|4|Using temporary; Using filesort 1|SIMPLE|product_tag|ref|tagid,productid|tagid|4|cs_final.tags.id|2| 1|SIMPLE|products|eq_ref|PRIMARY,cid|PRIMARY|4|cs_final.product_tag.productid|1|Using where 1|SIMPLE|stores|ALL|siteid|NULL|NULL|NULL|7|Using where; Using join buffer Can anyone help?


  • SQL Server insert performance with and without primary key

    - by Eric
    Summary: I have a table populated via the following: insert into the_table (...) select ... from some_other_table Running the above query with no primary key on the_table is ~15x faster than running it with a primary key, and I don't understand why. The details: I think this is best explained through code examples. I have a table: create table the_table ( a int not null, b smallint not null, c tinyint not null ); If I add a primary key, this insert query is terribly slow: alter table the_table add constraint PK_the_table primary key(a, b); -- Inserting ~880,000 rows insert into the_table (a,b,c) select a,b,c from some_view; Without the primary key, the same insert query is about 15x faster. However, after populating the_table without a primary key, I can add the primary key constraint and that only takes a few seconds. This one really makes no sense to me. More info: The estimated execution plan shows 0% total query time spent on the clustered index insert SQL Server 2008 R2 Developer edition, 10.50.1600 Any ideas?


  • SQL Server Connection Timeout C#

    - by Termin8tor
    First off I'd like to let everyone know I have searched my particular problem and can't seem to find what's causing my problem. I have an SQL Server 2008 instance running on a network machine and a client I have written connecting to it. To connect I have a small segment of code that establishes a connection to an sql server 2008 instance and returns a DataTable populated with the results of whatever query I run against the server, all pretty standard stuff really. Anyway the issue is, whenever I open my program and call this method, upon the first call to my method, regardless as to what I've set my Connection Timeout value as in the connection string, it takes about 15 seconds and then times out. Bizarrely though the second or third call I make to the method will work without a problem. I have opened up the ports for SQL Server on the server machine as outlined in this article: How to Open firewall ports for SQL Server and verified that it is correctly configured. Can anyone see a particular problem in my code? string _connectionString = "Server=" + @Properties.Settings.Default.sqlServer + "; Initial Catalog=" + @Properties.Settings.Default.sqlInitialCatalog + ";User Id=" + @Properties.Settings.Default.sqlUsername + ";Password=" + @Properties.Settings.Default.sqlPassword + "; Connection Timeout=1"; private DataTable ExecuteSqlStatement(string command) { using (SqlConnection conn = new SqlConnection(_connectionString)) { try { conn.Open(); using (SqlDataAdapter adaptor = new SqlDataAdapter(command, conn)) { DataTable table = new DataTable(); adaptor.Fill(table); return table; } } catch (SqlException e) { throw e; } } } The SqlException that is caught at my catch is : "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." This occurs at the conn.Open(); line in the code snippet I have included. If anyone has any ideas that'd be great!


  • how to transfer a time which was zero at year of 0000(maybe) to java.util.Date

    - by hguser
    I have a GPS time in the database, and when I do some queries I have to use java.util.Date; however, I found that I do not know how to change the GPS time to java.util.Date. Here is an example: The readable time === The GPS time 2010-11-15 13:10:00 === 634254192000000000 2010-11-15 14:10:00 === 634254228000000000 The difference between the two is "36000000000"; obviously it stands for one hour, so I think the unit of the GPS time in the db must be nanoseconds. 1 hour = 3600 seconds = 3600*1000 milliseconds == 3600*1000*10000 nanoseconds. Then I try to convert the GPS time. Take "634254228000000000" as an example; it stands for "2010-11-15 14:10:00": SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ssZ"); Date d = new Date(63425422800000L); System.out.println(sdf.format(d)); The result is 3979-11-15 13:00:00+0000. Of course it is wrong, so then I try to calculate: 63425422800000/3600000/24/365=2011.xxx So it seems that the GPS time here is not calculated from the Epoch (1970-01-01 00:00:00+0000). It may be something like (0001-01-01 00:00:00+0000). Then I try to use the following method: Date date_0=sdf.parse("0001-01-01 00:00:00+0000"); Date d = new Date(63425422800000L); System.out.println(sdf.format(d.getTime() + date_0.getTime())); The result is: 2010-11-13 13:00:00+0000. :( Now I am confused about how to calculate this GPS time. Any suggestion?
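    A hedged reading of the numbers in the excerpt (not from the original post): one hour corresponds to 36,000,000,000 units, i.e. 10,000,000 units per second, which are 100-nanosecond intervals rather than nanoseconds, and the whole value divided into days lands in year 2010 only if counting starts at 0001-01-01 - exactly the .NET DateTime "ticks" convention. If the column really holds such ticks, the conversion needs just two constants; sketched in Python to keep the arithmetic visible, but the same millisecond value can be passed straight to new java.util.Date(millis) in Java.

        from datetime import datetime, timezone

        TICKS_AT_UNIX_EPOCH = 621355968000000000   # 100-ns ticks at 1970-01-01 00:00:00 UTC
        TICKS_PER_MILLISECOND = 10000

        def ticks_to_unix_millis(ticks):
            # Shift the origin from 0001-01-01 to the Unix epoch, then scale
            # from 100-ns ticks down to milliseconds.
            return (ticks - TICKS_AT_UNIX_EPOCH) // TICKS_PER_MILLISECOND

        millis = ticks_to_unix_millis(634254192000000000)
        print(datetime.fromtimestamp(millis / 1000.0, tz=timezone.utc))
        # Lands on 2010-11-15, matching the date in the readable column
        # (the readable column is presumably local time, not UTC).

    The attempt in the excerpt dropped four zeros, which does scale ticks to milliseconds, but never shifted the origin from year 1 to 1970 - which is why the result came out in year 3979.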


  • Which event handler other than click could I use?

    - by Gaelle
    I have a jQuery question, and I really think it is a silly one: I'm a beginner in JS and jQuery... I'm using $("#myLink").click(function(){ $(".myClassToShow").show(); $(".myClassToHide").hide(); }); to hide elements that have myClassToHide as a class attribute and show elements that have myClassToShow as a class attribute. I think this is really easy to understand :) I didn't think it would hide every element with the right class, but, well, it works. My worry here is that my elements show and hide only for a few seconds: the time my mouse clicks on the link. I would like the myClassToShow elements to remain on the screen once I have clicked my link, and the myClassToHide elements to really hide. For example, on Johann Hammarstrom's website, when you click on "Print", all his works which are not prints are hidden, and only the print ones remain. That's kinda what I want. I searched using Firebug, but couldn't find which kind of event he used. I know an onchange is not the correct answer, so what? Could you help me please?


  • AJAX Uploading - Not waiting for response before continuing

    - by waxical
    I'm using Blueimp's jQuery Uploader (very good it is too, btw) and an S3 handler to upload files and then transfer them to S3 via the S3 API (from the PHP SDK). It works. The problem is, on large files (1GB) it can take anything up to a few minutes to transfer (via create-object) onto S3. The PHP file that does this is hung up until the transfer is complete. The problem is, the uploader (which utilises the jQuery Ajax method) seems to give up waiting and start again every time. I had thought this was related to the PHP INI 'max_input_time' setting or such, as it seemed to wait around 60 seconds, though this now appears to vary. I have upped max_input_time and related settings in the PHP INI - but no further. I've also considered (the more likely option) that JS, either in the script or the jQuery method, has a timeout. The developer (blueimp) has said there's no such timeout in the front-end script, nor have I seen any, and though 'timeout' is referenced in the jQuery Ajax method options, it seems to affect the entire time it uploads rather than the wait for a response - so that's not much use. Any help or guidance gratefully received.


  • Why does my binding break down on SilverLight ProgressBars?

    - by Bill Jeeves
    I asked a similar question about charts but I have given up on that and I am using progress bars instead. Essentially, I have ten progress bars in a Silverlight control. Each is showing a different value and updating every couple of seconds (it's a process monitor). Each progress bar has the same minimum value and maximum value so the bars can be compared. Trying to follow the M-V-VM model, I have bound the value of each bar to a property in my ViewModel. All of the maximum values for the bars are bound to a single property. When the model updates, the values and the maximum can all update. This allows the bars to re-scale as the sizes grow. I'm finding that the binding will stop working sometimes on one or more bars. I suspect it is because a bar's value occasionally becomes higher than the maximum. This is because if I update the maximums first and they are going down, the values will be too high. If I update the values first when the maximum needs increasing, the values are too high again. Is there a way to stop this behaviour? Some way, perhaps, to tell the progress bars that it's OK to temporarily go too high? Or some way to tell the bindings that they shouldn't be disabled when this happens? Or maybe I've got this completely wrong and there's some other issue with ProgressBar binding I don't know about?

