Search Results

Search found 6231 results on 250 pages for 'slow diver'.

Page 171/250

  • SQL Server Express 2008 Stored Procedure execution time spikes periodically

    - by user156241
    I have a big stored procedure on a SQL Server 2008 Express SP2 database that gets run about every 200 ms. Normal execution time is about 50 ms. What I am seeing is large inconsistencies in this run time. It will execute for a while, say 50-100 times at 40-60 ms, which is expected; then, seemingly at random, the same stored procedure will take far longer, say 900 ms or 1.5 seconds, to run. Sometimes more than one call of the same procedure in a row will take longer too. It appears that something is causing SQL Server to slow down dramatically every minute or so, but I can't figure out what. There is no timing pattern between the occurrences. I have the same setup on two different computers, one of which is a clean XP Pro install with no virus checking and nothing installed except SQL Server. Also, the recovery option for all the databases is set to "Simple".

  • How does an interpreter switch scope?

    - by Dox
    I'm asking this because I'm relatively new to interpreter development and I want to understand the basic concepts before reinventing the wheel. My idea is that the values of all variables are stored in an array that represents the current scope; upon entering a function, that array is swapped out and the original array is pushed onto some sort of stack. When leaving the function, the top element of the "scope stack" is popped off and used again. Is this basically right? Isn't swapping arrays (which means moving a lot of data around) very slow, and therefore not what modern interpreters do?
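
    A minimal Python sketch of the scheme described above (names like Interpreter, push_scope and pop_scope are purely illustrative, not from any real interpreter). The "swap" can be just rebinding a reference to a fresh frame, so no variable data is actually copied, which is essentially what most interpreters do with per-call activation records.

        # Illustrative only: scope "swapping" is rebinding a reference to a
        # fresh per-call frame, so no variable data is copied anywhere.
        class Interpreter(object):
            def __init__(self):
                self.globals = {}           # outermost scope
                self.scope = self.globals   # currently active scope
                self.scope_stack = []       # saved scopes of enclosing calls

            def push_scope(self):
                self.scope_stack.append(self.scope)
                self.scope = {}             # fresh frame for the callee

            def pop_scope(self):
                self.scope = self.scope_stack.pop()

            def call(self, body):
                self.push_scope()
                try:
                    return body(self.scope)
                finally:
                    self.pop_scope()

        interp = Interpreter()
        interp.globals["x"] = 1
        print(interp.call(lambda scope: scope.setdefault("y", 2)))   # 2
        print(interp.globals)                                        # {'x': 1}, untouched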

  • How can I force Eclipse to use Sun Java?

    - by Dan
    Hi. Before installing Eclipse I had OpenJDK as the default JVM. I have since switched the default to Sun Java; I did this because Eclipse Helios was running really slow, but unfortunately it still is. Do you have any ideas how to force it to use Sun Java? I could reinstall Eclipse, but I already have the Android SDK installed, so I would have to go through the whole process again, and that doesn't seem like the correct way to solve the problem anyway. I'm using Ubuntu 10.10. Output of java -version:

        java version "1.6.0_22"
        Java(TM) SE Runtime Environment (build 1.6.0_22-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 17.1-b03, mixed mode)

    Would be grateful for any help. Best, Daniel
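
    One way to pin the JVM, for what it's worth: Eclipse reads an eclipse.ini next to the eclipse executable, and a -vm entry there overrides the system default. A minimal sketch, assuming Sun Java 6 is in the usual Ubuntu location (the path and memory settings below are only examples; -vm and its path must sit on two separate lines, before -vmargs):

        -vm
        /usr/lib/jvm/java-6-sun/bin/java
        -vmargs
        -Xms40m
        -Xmx384m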

  • iPhone indexed table view problem

    - by steveY
    I have a table view in which I'm using sectionIndexTitlesForTableView to display an index. However, when I scroll the table, the index scrolls with it. This also results in very slow refreshing of the table. Is there something obvious I could be doing wrong? I want the index to remain in place on the right while the table scrolls. This is the code I'm using for the index titles:

        - (NSArray *)sectionIndexTitlesForTableView:(UITableView *)tableView {
            NSMutableArray *tempArray = [[NSMutableArray alloc] init];
            [tempArray addObject:@"A"];
            [tempArray addObject:@"B"];
            [tempArray addObject:@"C"];
            [tempArray addObject:@"D"];
            ...
            return tempArray;
        }

  • Spawning and waiting for child processes in Python

    - by Brendan Long
    The relevant part of the code looks like this:

        pids = []
        for size in SIZES:
            pids.append(os.spawnv(os.P_NOWAIT, RESIZECMD, [RESIZECMD, lotsOfOptions]))

        # Wait for all spawned ImageMagick processes to finish
        while pids:
            (pid, status) = os.waitpid(0, 0)
            if pid:
                pids.remove(pid)

    What this should do is spawn all of the processes, then wait for each one to finish before continuing. What it actually does is work most of the time, but sometimes it crashes in the next section (which expects all of these processes to be finished). Is there something wrong with this? Is there a better way of doing it? The environment it has to run in is CentOS with Python 2.4, but I'm testing on Cygwin with Python 2.5, so it could be that it fails on my machine but will work on the Linux one (the Linux machine is very slow and this error is rare, so I haven't been able to reproduce it there).
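
    For comparison, a minimal sketch of the same spawn-and-wait pattern using the subprocess module (available since Python 2.4), where each child is waited on explicitly through its own Popen object; RESIZECMD, SIZES and the command-line options below are hypothetical stand-ins:

        import subprocess

        SIZES = ["100x100", "200x200", "400x400"]   # hypothetical sizes
        RESIZECMD = "/usr/bin/convert"              # hypothetical ImageMagick binary

        # Start all resize jobs without blocking.
        procs = [subprocess.Popen([RESIZECMD, "-resize", size, "in.png", "out-" + size + ".png"])
                 for size in SIZES]

        # Wait for each specific child; no guessing which pid waitpid() returned.
        for p in procs:
            p.wait()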

  • C++: Best text accumulator

    - by MInner
    Text gets accumulated piecemeal before being sent to the client. Right now we use our own class that allocates memory for each piece as a char array (in effect it works like char[][] plus std::list<char*>). Then we build the whole string, convert it into std::string, and create a boost::asio::streambuf from it. That seems quite slow, I assume; correct me if I'm wrong. I know that in many cases the plain FILE type from stdio.h is used. How does it work? Does it allocate memory on every write? So, is it faster, and is there any way to read from a FILE into a boost::asio::streambuf?
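
    A minimal sketch of one alternative, assuming Boost.Asio is available: write each piece straight into a boost::asio::streambuf through a std::ostream wrapper, so there is no intermediate char array and no final std::string copy. This only illustrates the streambuf API; it is not a drop-in replacement for the existing class.

        #include <boost/asio.hpp>
        #include <ostream>
        #include <string>

        int main()
        {
            boost::asio::streambuf buf;
            std::ostream os(&buf);              // the streambuf grows as data is inserted

            os << "first piece, ";              // each insertion appends to the buffer
            os << std::string("second piece");

            // buf can now be handed to boost::asio::async_write() without another copy.
            return 0;
        }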

  • Using indexes on/through a MySQL view

    - by Peeja
    We've got a MySQL table in which rows are never updated; instead, new rows are added and the old ones are marked obsolete. Think Rails' acts_as_paranoid, but for every update. To make working with Rails sane, we've got a view which selects only the rows which are "current". That makes a much better "table" for our ActiveRecord model. The snag: our indexes aren't being used anymore. Queries on the view don't use the underlying table's indexes, and you can't add an index to a view. Without indexes, the app is unbearably slow. The only solution we've come up with is to build a materialized view, but that's a pain in MySQL because they're not natively supported. Is there a better way to do this?

  • Aggregating and Displaying Multiple Feeds

    - by Keith
    I want to pull feeds from multiple online services (e.g. Tumblr, Google Reader, Delicious) and aggregate them into a single feed to display on my site. I know of services like YQL or Yahoo! Pipes which will combine feeds, but sometimes those services are too slow. I was wondering what the best method would be if I wanted to run this on my own server (using JavaScript or PHP). Ideally, I would cache the results to cut down on processing.

  • POS desktop application using DB or local files? (using WPF)

    - by Panindra
    I am planning to build a POS application for my shop. I have enough knowledge to build the application either with a database or with local files (System.IO binary files) to store and access its data. But I have no deployment experience, and I'm confused about which storage option to choose. A database using an MDF file may be a good option (it could save plenty of coding), but I don't want to run SQL Server on my desktop. Since I am building with WPF, my concern is that the application may get slow due to server response times on top of WPF's rendering. I then tried using only local data (binary files) and retrieving it through classes and objects, but that coding is taking a lot of time, so in the middle of the process I'm stuck in the dilemma of going back to a database. Please help: performance-wise, which one is better, and in the real world, which approach is more widely used in professional applications?

  • Multithreaded FTP upload. Is it possible?

    - by Arty
    I need to upload multiple files from a directory to a server via FTP and SFTP. I've solved this task for SFTP with Python, paramiko and threading, but I'm having trouble doing the same for FTP. I tried to use Python's ftplib, but it seems that it doesn't support threading, so I upload all files one by one, which is very slow. I'm wondering whether it's even possible to do multithreaded uploads over the FTP protocol without creating separate connections/authorizations (that takes too long)? The solution can be in Python or PHP. Maybe cURL? I would be grateful for any ideas.
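
    For what it's worth, a minimal sketch of the usual workaround in Python: ftplib with one FTP connection per worker thread, pulling file names from a shared queue. It does open several connections, which is exactly what the question hopes to avoid, but it is the common way to parallelise plain FTP uploads (HOST, USER, PASSWD and the directory name are hypothetical placeholders):

        import ftplib
        import os
        import threading
        import Queue                      # 'queue' on Python 3

        HOST, USER, PASSWD = "ftp.example.com", "user", "secret"   # placeholders
        files = Queue.Queue()
        for name in os.listdir("upload_dir"):
            files.put(name)

        def worker():
            ftp = ftplib.FTP(HOST)        # one connection per thread
            ftp.login(USER, PASSWD)
            while True:
                try:
                    name = files.get_nowait()
                except Queue.Empty:
                    break
                f = open(os.path.join("upload_dir", name), "rb")
                ftp.storbinary("STOR " + name, f)   # upload one file on this connection
                f.close()
            ftp.quit()

        threads = [threading.Thread(target=worker) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()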

  • GVim highlighting with matchadd eventually slows down?

    - by Kyle MacFarlane
    I have the following in ~/.vim/ftplugin/python.vim to highlight long lines, accidental tabs and extra whitespace in Python files:

        hi CustomPythonErrors ctermbg=red ctermfg=white guibg=#592929
        au BufWinEnter *.py call matchadd('CustomPythonErrors', '\%>80v.\+', -1)
        au BufWinEnter *.py call matchadd('CustomPythonErrors', '/^\t\+/', -1)
        au BufWinEnter *.py call matchadd('CustomPythonErrors', '\s\+$', -1)
        au BufWinLeave *.py call clearmatches()

    The BufWinLeave is there so that the matches are cleared when I switch to another file, in case that file isn't a .py file. It's an essential feature for me when working with something like Django. It all works fine for random amounts of time, from ten minutes to hours (my guess is it depends on how many files I open and close). But eventually, whenever any line over 80 characters is displayed, GVim slows to a halt and requires a restart. Does anyone have any ideas why this would eventually slow down?

  • Database Abstraction & Factory Methods

    - by pws5068
    I'm interested in learning more about design practices in PHP for database abstraction and factory methods. For background, my site is a common-interest social networking community currently in beta. I've started moving my old object-retrieval code to factory methods. However, I do feel like I'm limiting myself by keeping a lot of SQL table names and structure repeated in each function/method. Questions: Is there a reason to use PEAR (or similar) if I don't anticipate switching databases? Can PEAR interface with the MySQLi prepared statements I currently use? Will it help me separate table names from each method? (If not, what other design patterns should I research?) Will it slow down my site once I have a significantly large member base?

  • C++: Platform independent game lib?

    - by Martijn Courteaux
    Hi, I want to write a serious 2D game, and it would be nice to have a version for Linux and one for Windows (and eventually OS X). Java is fantastic because it is platform independent, but Java is too slow to write a serious game. So I thought of writing it in C++, but C++ isn't very cross-platform friendly. I can find game libraries for Windows and libraries for Linux, but I'm searching for one that I can use for both, by recompiling the source on a Windows platform and on a Linux platform. Are there engines or libraries for this, or is the idea unrealistic? Is it really that easy (just recompiling)? Any advice and information about C++ libraries would be very much appreciated!
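
    As one illustration of how such a library looks in practice, a minimal sketch using SDL (1.2 API), where the same source compiles on Windows, Linux and OS X; the window size and delay are arbitrary:

        #include <SDL/SDL.h>

        int main(int argc, char *argv[])
        {
            if (SDL_Init(SDL_INIT_VIDEO) != 0)   // identical call on every platform
                return 1;

            SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
            if (screen == NULL) {
                SDL_Quit();
                return 1;
            }

            SDL_Delay(2000);                     // keep the window up briefly
            SDL_Quit();                          // SDL handles the platform-specific cleanup
            return 0;
        }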

  • "Ambiguous template specialization" problem

    - by Setien
    I'm currently porting a heap of code that has previously only been compiled with Visual Studio 2008. In this code, there's an arrangement like this:

        template <typename T>
        T convert( const char * s )
        {
            // slow catch-all
            std::istringstream is( s );
            T ret;
            is >> ret;
            return ret;
        }

        template <>
        inline int convert<int>( const char * s )
        {
            return (int)atoi( s );
        }

    Generally, there are a lot of specializations of this templated function with different return types, invoked like this:

        int i = convert<int>( szInt );

    The problem is that these template specializations result in an "ambiguous template specialization" error. If it were something other than the return type that differentiated these specializations, I could obviously just use overloads, but that's not an option. How do I solve this without having to change all the places the convert functions are called?
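
    A minimal sketch of one common workaround (not necessarily the fix for this exact compiler error): dispatch through a class template, which can be specialized per return type without ambiguity, and keep convert<T>() as a thin wrapper so the existing call sites stay untouched.

        #include <cstdlib>
        #include <sstream>
        #include <string>

        template <typename T>
        struct Converter {
            static T apply(const char *s) {          // slow catch-all
                std::istringstream is(s);
                T ret;
                is >> ret;
                return ret;
            }
        };

        template <>
        struct Converter<int> {
            static int apply(const char *s) { return std::atoi(s); }
        };

        template <typename T>
        T convert(const char *s) { return Converter<T>::apply(s); }

        int main() {
            int i = convert<int>("42");
            double d = convert<double>("3.14");
            return (i == 42 && d > 3.0) ? 0 : 1;
        }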

  • Alternatives to Java on Android

    - by user84584
    Hello guys, I just got myself an Android phone and I'm dying to start coding for it! However, I'm not a big Java fan. Although I can live with that, I would like to know whether there are reasonable alternatives that run on the Android virtual machine. I've done a medium-sized project using Clojure, but from the reviews I've read it's very slow when running on Android. How about Scala? I read that some people have experimented with it on Android; is it "fast enough"? How big is the learning curve? Cheers, Ze Maria

  • Java: speed up reading foreign characters

    - by Yang
    My current code needs to read foreign characters from the web. My current solution works, but it is very slow, since it reads char by char using an InputStreamReader. Is there any way to speed it up and still get the job done?

        // Pull content stream from response
        HttpEntity entity = response.getEntity();
        InputStream inputStream = entity.getContent();
        StringBuilder contents = new StringBuilder();
        int ch;
        InputStreamReader isr = new InputStreamReader(inputStream, "gb2312");
        // FileInputStream file = new InputStream(is);
        while ((ch = isr.read()) != -1)
            contents.append((char) ch);
        String encode = isr.getEncoding();
        return contents.toString();
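
    A minimal sketch of chunked reading with the same gb2312 decoder, which removes the per-character read() overhead; the FileInputStream below is only a stand-in for the HTTP entity stream in the question:

        import java.io.*;

        public class ChunkedRead {
            public static void main(String[] args) throws IOException {
                InputStream inputStream = new FileInputStream("page.html"); // stand-in source
                Reader isr = new BufferedReader(
                        new InputStreamReader(inputStream, "gb2312"));
                StringBuilder contents = new StringBuilder();
                char[] buf = new char[8192];
                int n;
                while ((n = isr.read(buf)) != -1) {   // read a block of chars at a time
                    contents.append(buf, 0, n);
                }
                isr.close();
                System.out.println(contents.length() + " characters read");
            }
        }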

  • KMeans clustering for more than 5 million vectors

    - by Wajih
    I have hit a real problem. I need to do some KMeans clustering for 5 million vectors, each containing about 32 columns. I tried out Mahout, which requires Linux, but I am on Windows and I am restrained from using a Linux OS or any sort of simulator. Can anyone suggest a KMeans clustering implementation that scales up to 5M vectors and converges quickly? I have tested a few, but they won't scale, which means they are slow and take forever to complete. Thanks
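
    A minimal sketch of one scalable option, assuming scikit-learn can be installed on the Windows machine: MiniBatchKMeans fits on small random batches, so memory use and per-iteration cost stay bounded even for millions of rows (the random matrix below is only a stand-in for the real 5M x 32 data):

        import numpy as np
        from sklearn.cluster import MiniBatchKMeans

        # Stand-in data: the real input would be roughly 5,000,000 x 32.
        X = np.random.rand(200000, 32).astype(np.float32)

        km = MiniBatchKMeans(n_clusters=50, batch_size=10000)
        km.fit(X)                              # streams over mini-batches until converged
        print(km.cluster_centers_.shape)       # (50, 32)
        print(km.inertia_)                     # within-cluster sum of squares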

  • Storing object as a column in LINQ

    - by Alex
    Hello, I have a class which constructs itself from a string, like this:

        CurrencyVector v = new CurrencyVector("10 WMR / 20 WMZ");

    It's actually a class which holds multiple currency values, but that doesn't matter much. I need to change the type of a column in my LINQ table (in the VS 2010 designer) from String to that class, CurrencyVector. If I do, I get a runtime error when the LINQ runtime tries to cast String to CurrencyVector (when populating the table from the database). Adding IConvertible did not help. I wrapped these columns in properties, but that's an ugly and slow solution. Searching the internet gave no results.

  • Performance Overhead of Perf Event Subsystem in Linux Kernel

    - by Bo Xiao
    Performance counters for Linux are a new kernel-based subsystem that provides a framework for performance analysis. It covers hardware-level features (CPU/PMU, Performance Monitoring Unit) as well as software features (software counters, tracepoints). Since 2.6.33, the kernel has provided the 'perf_event_create_kernel_counter' API for developers to create kernel counters that collect system runtime information. What concerns me most is the performance impact on the overall system when tracepoints/ftrace are enabled. There are no docs I can find about this. I was once told that ftrace is implemented by dynamically patching code; will it slow the system dramatically?

  • Optimizing MySQL to avoid redundancy but still have fast access to calculable data

    - by diglettpotato
    An example for the sake of the question: I have a database which contains users, questions, and answers. Each user has a score which can be calculated from the data in the questions and answers tables; therefore, a score field in the users table would be redundant. However, if I don't use a score field, then calculating the score every time would significantly slow down the website. My current solution is to keep a score field and run a cron job every few hours that recalculates everybody's score and updates the field. Is there a better way to handle this?

  • How can two threads access a common array of buffers with minimal blocking? (C#)

    - by Jelly Amma
    Hello, I'm working on an image processing application where I have two threads on top of my main thread: 1 - CameraThread, which captures images from the webcam and writes them into a buffer; 2 - ImageProcessingThread, which takes the latest image from that buffer for filtering. The reason this is multithreaded is that speed is critical: I need CameraThread to keep grabbing pictures and making the latest capture ready for ImageProcessingThread to pick up while it's still processing the previous image. My problem is finding a fast and thread-safe way to access that common buffer, and I've figured that, ideally, it should be a triple buffer (image[3]) so that if ImageProcessingThread is slow, CameraThread can keep writing to the two other images, and vice versa. What sort of locking mechanism would be the most appropriate for this to be thread-safe? I looked at the lock statement, but it seems like it would make one thread block waiting for another to finish, which would defeat the point of triple buffering. Thanks in advance for any ideas or advice. J.

  • BULK SMS, Long Codes (VMN MSIDN), T-mobile?

    - by John
    Does any US wireless carrier offer individuals or companies a direct connection to the SMSC? The number is 747-772-3101 (replace the 7s with 6s). This number is registered to T-Mobile and was verified by T-Mobile to be a valid subscriber sending 160,000+ text messages monthly, with nothing more than an unlimited text messaging plan on top of the cheapest voice plan. The company behind the number confirmed to me that they don't use GSM modems, as those are too slow. So I know it's possible, but who would I contact? Sales, or anyone else reachable through a 1-800 number, is ignorant of these services, and developer.t-mobile is worthless and doesn't reply to emails. Any info?

  • Optimizing BeautifulSoup (Python) code

    - by user283405
    I have code that uses the BeautifulSoup library for parsing, but it is very slow. The code is written in such a way that threads cannot be used. Can anyone help me with this? I am using BeautifulSoup for parsing and then saving into a DB. If I comment out the save statement, it still takes a long time, so the problem is not the database.

        def parse(self, text):
            soup = BeautifulSoup(text)
            arr = soup.findAll('tbody')
            for i in range(0, len(arr) - 1):
                data = Data()
                soup2 = BeautifulSoup(str(arr[i]))
                arr2 = soup2.findAll('td')
                c = 0
                for j in arr2:
                    if str(j).find("<a href=") > 0:
                        data.sourceURL = self.getAttributeValue(str(j), '<a href="')
                    else:
                        if c == 2:
                            data.Hits = j.renderContents()
                        # and a few others...
                    c = c + 1
                data.save()

    Any suggestions? Note: I already asked this question here, but it was closed due to incomplete information.
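
    A minimal sketch of the most likely speed-up, staying with BeautifulSoup 3 as in the question: work on the already-parsed tbody elements instead of serialising each one with str() and parsing it a second time, and let find() locate the link instead of searching raw HTML strings (the Data class and the save step are left out here):

        from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3, as in the question

        def parse(text):
            soup = BeautifulSoup(text)
            results = []
            for tbody in soup.findAll('tbody'):
                cells = tbody.findAll('td')       # reuse the existing parse tree
                row = {}
                for c, cell in enumerate(cells):
                    link = cell.find('a', href=True)
                    if link is not None:
                        row['sourceURL'] = link['href']
                    elif c == 2:
                        row['Hits'] = cell.renderContents()
                results.append(row)
            return results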

  • Classic ASP & .NET 2 site not working on Windows 7

    - by alexander2116
    I am receiving the following error message: "An error occurred on the server when processing the URL. Please contact the system administrator. If you are the system administrator please click here to find out more about this error." I have my site in the inetpub directory, in a subfolder called website. I have also gone to Add/Remove Windows Components and installed ASP. In IIS Manager I have ASP listed with default settings. The initial website page is a classic ASP page. Has anyone else encountered this issue? Please help! I'm having to develop through a VPN/Remote Desktop combo, which is painfully slow! Thanks so much to anyone who can help!

  • Create a dummy index.html inside a newly created (mkdir) directory

    - by jonnypixel
    Hi, I know this may be a silly question, but I can't seem to find a simple answer. I have a PHP script that makes a directory for me when the user starts a new entry. That directory holds photos for their gallery. What I would like to do is also create one index.html file inside that new directory, with a few lines of HTML code in it. How do I do this? I'm guessing that the file would be made like so:

        mkdir('users/'.$id.'/index.html', 0755);

    But how do I add the HTML into that index.html file? Or do I keep one file on the server and copy it over into there during the mkdir process? Anyway, a really simple answer would be best, as I am very slow at this learning thing. Thank you, John
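
    A minimal sketch of both options mentioned above, assuming $id comes from the surrounding script; note that mkdir() only ever creates a directory, so the file itself has to be written (or copied) in a separate step:

        <?php
        $dir = 'users/' . $id;
        if (!is_dir($dir)) {
            mkdir($dir, 0755);                        // create the gallery directory only
        }

        // Option 1: write the HTML directly.
        $html = "<html>\n<body>\n<p>Gallery coming soon.</p>\n</body>\n</html>\n";
        file_put_contents($dir . '/index.html', $html);

        // Option 2: copy a template kept elsewhere on the server.
        // copy('templates/index.html', $dir . '/index.html');
        ?>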
