Search Results

Search found 28685 results on 1148 pages for 'query performance'.

Page 271 of 1148

  • Programmatically query route planner for travel time/distance?

    - by Rich
    Hi, I would like to achieve something whereby I have a spreadsheet with the following columns:

        Column A - place name
        Column B - place name
        Column C - distance by road between the places in columns A and B
        Column D - travel time by road between the places in columns A and B

    I thought it might be possible using Google Docs' spreadsheet and its 'Google' functions, but I've not found any that might do the trick. In the end I could knock up an app to do it using the Google Maps API, but I would rather avoid that if I can. Thanks in advance for any suggestions. Rich

    Read the article

  • xcopy Not Suppressing File/Directory Query

    - by Daniel Bingham
    Hey folks, I'm attempting to use xcopy to copy a file from one machine to another on our network as part of a Java program. I'm calling xcopy like this:

        xcopy "C:\Program Files\path\to\my\file" "\\othermachine\c$\Documents and Settings\<myUserName>\Desktop\Test\path\in\directory\structure\to\file" /e /y /i

    Because I'm calling it from within Java, I need all the prompts to be suppressed. For the most part, /i and /y have done exactly that. However, for this one file /i fails and I get the file-or-directory prompt. The result is that it hangs the entire program. I've also tried calling it with /s /t /q appended to the existing options, to no avail.

    Why isn't /i working to suppress the file-or-directory prompt? Is there an order I need to call the options in? Is there something else I need to do?

    EDIT: I should mention that the file is a text file with a single line of text. It does not have an extension. It looks like this: FILE-NAME
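    One common workaround (my suggestion, not something from the original post) is to answer the prompt rather than suppress it: run xcopy through cmd.exe and pipe "F" into it, so the copy can never block on the file-or-directory question. A minimal Java sketch with placeholder paths:

        import java.io.IOException;
        import java.io.InputStream;

        public class XcopyRunner {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Placeholder paths; substitute the real source and destination.
                String source = "C:\\temp\\FILE-NAME";
                String dest = "\\\\othermachine\\c$\\temp\\target\\FILE-NAME";
                // "echo F |" pre-answers xcopy's "Does ... specify a file name or directory name?" prompt.
                ProcessBuilder pb = new ProcessBuilder(
                        "cmd", "/c", "echo F | xcopy \"" + source + "\" \"" + dest + "\" /y /i");
                pb.redirectErrorStream(true);
                Process p = pb.start();
                // Drain the combined output so xcopy cannot block on a full pipe buffer.
                InputStream out = p.getInputStream();
                byte[] buf = new byte[4096];
                while (out.read(buf) != -1) { /* discard */ }
                System.out.println("xcopy exit code: " + p.waitFor());
            }
        }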

    Read the article

  • Transitioning from Domain Authentication to SQL Server Authentication

    - by Albert Perrien
    Greetings all, I've run into a problem that has me stumped. I've put together a database in SQL Server Express, and I'm having a strange permissions problem. The database is on my development machine with a domain user: DOMAIN\albertp. My development database server is set for "SQL Server and Windows Authentication" mode. I can edit and query my database without any problems when I log in using Windows Authentication. However, when I log in as any user that uses SQL Server authentication (including sa) and run queries against my database, such as:

        SELECT * FROM [Testing].[dbo].[AuditingReport]

    I get:

        Msg 18456, Level 14, State 1, Line 1
        Login failed for user 'auditor'.

    I'm logged into the server from SQL Server Management Studio as 'auditor', and I don't see anything in the error log about the login failure. I've already run:

        USE Testing;
        GRANT ALL TO auditor;
        GO

    And I still get the same error. What permissions do I have to set for the database to be usable by others outside of my personal domain login? Or am I looking at the wrong problem? My ultimate goal is to have the database be accessible from a set of PHP pages, using either a common login (hence 'auditor') or a login specific to a set of individual users.
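    Since the error is a failed login rather than a denied query, the GRANT inside the database never comes into play. A hedged sketch of the usual setup for a SQL-authenticated account, assuming the server was restarted after mixed-mode authentication was enabled (the password is a placeholder):

        -- Run from the working DOMAIN\albertp connection.
        CREATE LOGIN auditor WITH PASSWORD = 'StrongPassword!1';    -- server-level login
        GO
        USE Testing;
        GO
        CREATE USER auditor FOR LOGIN auditor;                      -- database user mapped to that login
        EXEC sp_addrolemember 'db_datareader', 'auditor';           -- read access to the tables
        GO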

    Read the article

  • Querying a 3rd party website's database from my website

    - by Mong134
    The Goal: To retrieve information from a 3rd-party database based on a user's query on my ASP.NET website.

    The Details: I need to be able to search 3rd-party websites for information relating to pharmaceutical drugs. Basically, here's what I've been tasked with: a user starts entering the name of a drug they're using in their experiments, and while they're typing, a 3rd-party website (e.g., here or here) is queried and suggestions are made based off of what they've typed. Once they've made a selection, certain properties (molecular weight, chemical structure, etc.) are retrieved from the 3rd-party database and stored in our database. PharmaGKB.org's search bar is pretty much what I need to implement, but I need to access a 3rd-party db. The site that I'm working on is ASP.NET/C#.

    The Problem: I don't really know where to start with this. There's a downloadable Perl example at the bottom of the page here, but it didn't really help me all that much. I'm at a loss as to how to implement this, or even find information about how to do it. The AJAX toolkit was suggested, but I'm not sure if that will solve the issue. JavaScript is also being considered, but again, I'm not sure if that will be sufficient either.

    Perl Example Connection: As mentioned, here is a snippet from the Perl example given on the Pharmgkb.org site:

        my $call = SOAP::Lite
           -> readable (1)
           -> uri('SearchService')
           -> proxy('http://www.pharmgkb.org/services/SearchService')
           -> search ($ARGV[0]);

    However, I'm not sure how to implement this in C#/ASP.NET/JavaScript. There's a question on Stack Overflow about embedding Perl in C#, but it requires a C wrapper as well, and I don't think that three languages are necessary or wise to solve this issue.
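    One possible route, sketched under assumptions rather than verified against the service: let Visual Studio's "Add Web Reference" generate a SOAP proxy from the service's WSDL, then call it from the page's autocomplete handler. Every identifier below (the namespace, proxy class, and result type) is hypothetical and depends on what the WSDL actually generates:

        using System;
        using PharmGkbSearch;                      // hypothetical namespace chosen when adding the web reference

        public static class DrugLookup
        {
            public static string[] Suggest(string typedSoFar)
            {
                using (var service = new SearchService())           // hypothetical generated proxy class
                {
                    // Mirrors the Perl example's single-argument search() call.
                    object[] results = service.search(typedSoFar);  // hypothetical generated method
                    return Array.ConvertAll(results, r => r.ToString());
                }
            }
        }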

    Read the article

  • Horrible eclipse performance on macbook pro running 10.5.8

    - by user246114
    Hi, I am using Eclipse Galileo on my MacBook Pro. After a few minutes it starts dragging really badly; it takes something like 8 seconds to open a file, and I don't have many files open at all. I have already modified the config file to increase the RAM and all that stuff. Is there something wrong with this version of Eclipse? I've never had it run so poorly on here. Thanks
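    For reference, the memory settings usually live in eclipse.ini (inside Eclipse.app/Contents/MacOS on the Mac). A Galileo-era configuration might look like the sketch below; the exact values are an assumption, not taken from the post:

        -showsplash
        org.eclipse.platform
        -vmargs
        -Dosgi.requiredJavaVersion=1.5
        -XX:MaxPermSize=256m
        -Xms256m
        -Xmx1024m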

    Read the article

  • Vb.exe performance time

    - by vinodacharyabva
    Hi, I am running a VB .exe through automation. In the exe I have written code which takes data from a database and saves that data into a file. The first time I ran the .exe it took 1 minute. As a baseline test I then called the same .exe 5 times, one after the other, but it took nearly 10 minutes in total. My question is: if it takes 1 minute to generate 1 report, it should take 5 minutes to generate 5 reports, so why is it taking 10 minutes (more than double)? Is there any problem with calling an exe repeatedly, one after the other?

    Read the article

  • MonoTouch - foreach vs for loops (performance)

    - by ifwdev
    Normally I'd say a consideration like this is premature optimization. Right now I have some event handlers being attached inside a foreach loop, and I'm wondering if this style might be prone to leaks or inefficient memory use due to the closures being created. Is there any validity to this thinking?
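    For reference, a minimal C# sketch of the pattern in question (the Publisher/Screen names are made up). Each lambda allocates one small closure object, which is rarely the problem; the more usual leak is that a subscribed handler keeps the subscriber alive until it is detached, so the subscriptions are remembered here so they can be removed:

        using System;
        using System.Collections.Generic;

        class Publisher
        {
            public event EventHandler Clicked;
            public void RaiseClicked() { if (Clicked != null) Clicked(this, EventArgs.Empty); }
        }

        class Screen
        {
            readonly List<KeyValuePair<Publisher, EventHandler>> subscriptions =
                new List<KeyValuePair<Publisher, EventHandler>>();

            public void Attach(IEnumerable<Publisher> buttons)
            {
                foreach (var button in buttons)
                {
                    var captured = button;   // copy: pre-C# 5 compilers share the foreach variable across iterations
                    EventHandler handler = (s, e) => Console.WriteLine("clicked " + captured.GetHashCode());
                    captured.Clicked += handler;
                    subscriptions.Add(new KeyValuePair<Publisher, EventHandler>(captured, handler));
                }
            }

            public void Detach()
            {
                // Unsubscribing is what lets the publishers stop referencing this Screen.
                foreach (var pair in subscriptions)
                    pair.Key.Clicked -= pair.Value;
                subscriptions.Clear();
            }
        }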

    Read the article

  • Improve performance of sorting files by extension

    - by DxCK
    With a given array of file names, the simplest way to sort it by file extension is like this:

        Array.Sort(fileNames, (x, y) => Path.GetExtension(x).CompareTo(Path.GetExtension(y)));

    The problem is that on a very long list (~800k) it takes a very long time to sort, while sorting by the whole file name is faster by a couple of seconds! In theory there is a way to optimize it: instead of using Path.GetExtension() and comparing the newly created extension-only strings, we can provide a Comparison that compares starting from the LastIndexOf('.') without creating new strings. Now, suppose I have found the LastIndexOf('.'); I want to reuse .NET's native StringComparer and apply it only to the part of the string after the LastIndexOf('.'). I haven't found a way to do that. Any ideas?
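    A sketch of one way to do it without new strings, using the string.Compare overload that takes start indexes. It assumes the entries are plain file names (a dotted directory name in a full path would throw LastIndexOf off) and that an ordinal, case-insensitive ordering of extensions is acceptable:

        using System;

        static class ExtensionSort
        {
            // Compares only the characters after the last '.' without allocating substrings.
            public static readonly Comparison<string> ByExtension = (x, y) =>
            {
                int xi = x.LastIndexOf('.');
                int yi = y.LastIndexOf('.');
                xi = xi < 0 ? x.Length : xi + 1;   // no dot: treat as an empty extension
                yi = yi < 0 ? y.Length : yi + 1;
                int len = Math.Max(x.Length - xi, y.Length - yi);
                return string.Compare(x, xi, y, yi, len, StringComparison.OrdinalIgnoreCase);
            };
        }

        // Usage:
        // Array.Sort(fileNames, ExtensionSort.ByExtension);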

    Read the article

  • optimizing any OS for maximum informix client/server performance

    - by Frank Developer
    Is there any Informix documentation on optimizing an operating system where an ifx engine is running? For example, in Linux: stripping out all unnecessary binaries, daemons and utilities, tuning kernel parameters, optimizing raw and cooked devices (hdparm), placing swap space on the beginning tracks of a disk, etc. Someday, maybe, Informix could create its own proprietary, dedicated PICK-like O/S to provide the most optimized environment for a standalone ifx server? The general idea is for the OS that ifx sits on to have the smallest footprint and the lowest overhead impact.

    Read the article

  • vectorizing loops in Matlab - performance issues

    - by Gacek
    This question is related to these two: http://stackoverflow.com/questions/2867901/introduction-to-vectorizing-in-matlab-any-good-tutorials and http://stackoverflow.com/questions/2561617/filter-that-uses-elements-from-two-arrays-at-the-same-time

    Based on the tutorials I read, I was trying to vectorize a procedure that takes a really long time. I've rewritten this:

        function B = bfltGray(A,w,sigma_r)
        dim = size(A);
        B = zeros(dim);
        for i = 1:dim(1)
            for j = 1:dim(2)
                % Extract local region.
                iMin = max(i-w,1);
                iMax = min(i+w,dim(1));
                jMin = max(j-w,1);
                jMax = min(j+w,dim(2));
                I = A(iMin:iMax,jMin:jMax);
                % Compute Gaussian intensity weights.
                F = exp(-0.5*(abs(I-A(i,j))/sigma_r).^2);
                B(i,j) = sum(F(:).*I(:))/sum(F(:));
            end
        end

    into this:

        function B = rngVect(A, w, sigma)
        W = 2*w+1;
        I = padarray(A, [w,w],'symmetric');
        I = im2col(I, [W,W]);
        H = exp(-0.5*(abs(I-repmat(A(:)', size(I,1),1))/sigma).^2);
        B = reshape(sum(H.*I,1)./sum(H,1), size(A, 1), []);

    But this version seems to be as slow as the first one, and in addition it uses a lot of memory and sometimes causes memory problems. I suppose I've made something wrong, probably some logic mistake regarding vectorizing. Well, in fact I'm not surprised: this method creates really big matrices and the computations are probably proportionally longer. I have also tried to write it using nlfilter (similar to the second solution given by Jonas), but that seems to be hard since I use Matlab 6.5 (R13) and there are no sophisticated function handles available. So once again, I'm asking not for a ready solution, but for some ideas that would help me solve this in reasonable time. Maybe you can point out what I did wrong.

    Read the article

  • Cursor returns zero rows from query to table

    - by brockoli
    I've created an SQLiteDatabase in my app and populated it with some data. I can connect to my AVD with a terminal, and when I issue select * from articles; I get a list of all the rows in my table and everything looks fine. However, in my code when I query my table, I get a cursor back that has my table's columns, but zero rows of data. Here is my code:

        mDbHelper.open();
        Cursor articles = mDbHelper.fetchAllArticles();
        startManagingCursor(articles);
        Cursor feeds = mDbHelper.fetchAllFeeds();
        startManagingCursor(feeds);
        mDbHelper.close();

        int titleColumn = articles.getColumnIndex("title");
        int feedIdColumn = articles.getColumnIndex("feed_id");
        int feedTitleColumn = feeds.getColumnIndex("title");

        /* Check if our result was valid. */
        if (articles != null) {
            int count = articles.getCount();
            /* Check if at least one Result was returned. */
            if (articles.moveToFirst()) {

    In the above code, my Cursor articles comes back with my 4 columns, but when I call getCount() it returns zero, even though I can see hundreds of rows of data in that table from the command line. Any idea what I might be doing wrong here? Also, here is my code for fetchAllArticles:

        public Cursor fetchAllArticles() {
            return mDb.query(ARTICLES_TABLE, new String[] {ARTICLE_KEY_ROWID, ARTICLE_KEY_FEED_ID,
                    ARTICLE_KEY_TITLE, ARTICLE_KEY_URL}, null, null, null, null, null);
        }

    Rob W.
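    One thing worth checking, offered as a guess rather than a confirmed diagnosis: mDbHelper.close() runs before the cursors are read, and a SQLite cursor generally cannot populate itself once its database has been closed. A sketch of the reordered flow, keeping the database open for the life of the Activity (the helper class name is hypothetical):

        import android.app.Activity;
        import android.database.Cursor;
        import android.os.Bundle;
        import android.util.Log;

        public class ArticlesActivity extends Activity {
            private ArticlesDbAdapter mDbHelper;   // hypothetical name for the poster's helper class

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                mDbHelper = new ArticlesDbAdapter(this);
                mDbHelper.open();                               // stays open while the cursor is in use
                Cursor articles = mDbHelper.fetchAllArticles();
                startManagingCursor(articles);                  // the Activity will close the cursor
                if (articles.moveToFirst()) {
                    int titleColumn = articles.getColumnIndex("title");
                    do {
                        Log.d("Articles", articles.getString(titleColumn));
                    } while (articles.moveToNext());
                }
            }

            @Override
            protected void onDestroy() {
                mDbHelper.close();                              // close only after the cursors are done
                super.onDestroy();
            }
        }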

    Read the article

  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index: the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Doing the queries on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week.

    My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: Is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently?

    So far I have found two approaches for updating aux_table:

    1. Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created.
    2. Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table, occasionally dropping older entries. Only copying entries with a greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).
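    A sketch of the second approach in SQL, assuming aux_table shares the original table's primary key so the re-copied boundary rows are de-duplicated (table and column names are placeholders):

        -- Newest timestamp already copied; falls back to the epoch when aux_table is empty.
        SELECT COALESCE(MAX(created_at), '1970-01-01') INTO @cutoff FROM aux_table;

        -- Re-copy from that boundary onwards. ">=" plus INSERT IGNORE catches late rows
        -- that share the boundary timestamp without duplicating the ones already present.
        INSERT IGNORE INTO aux_table
        SELECT * FROM original_table
        WHERE created_at >= @cutoff;

        -- Drop entries older than one week.
        DELETE FROM aux_table WHERE created_at < NOW() - INTERVAL 7 DAY;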

    Read the article

  • Commercial web application--scalable database design

    - by Rob Campbell
    I'm designing a set of web apps to track scientific laboratory data. Each laboratory has several members, each of whom will access both their own data and that of their laboratory as a whole. Many typical queries will thus be expected to return records of multiple members (e.g. my mouse, Joe's mouse and Sally's mouse). I think I have the database fairly well normalized. I'm now wondering how to ensure that users can efficiently access both their own data and their lab's data set when it is mixed in among (hopefully) a whole ton of records from other labs.

    What I've come up with so far is that most tables will end with two fields: user_id and labgroup_id. The WHERE clause of any SELECT statement will include the appropriate reference to one of the id fields ("...WHERE labgroup_id=n..." or "...WHERE user_id=n...").

    My questions are:

    1. Is this an approach that will scale to 10^6 or more records?
    2. If so, what's the best way to use these fields in a query so that it most efficiently searches the relevant subset of the database? e.g. Should the first step in querying be to create a temporary table containing just the labgroup's data? Or will indexing on some combination of the id, user_id, and labgroup_id fields be sufficient at that scale?

    I thank any responders very much in advance.
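    For what it's worth, a sketch of the indexing-only answer to question 2, with placeholder table and column names: composite secondary indexes that lead with the column the WHERE clause filters on, so each query walks only that group's or user's slice of the table and no temporary table is needed:

        -- One index per access path.
        CREATE INDEX idx_samples_labgroup ON samples (labgroup_id, created_on);
        CREATE INDEX idx_samples_user     ON samples (user_id, created_on);

        -- Either filter then becomes a range scan over its index,
        -- which stays efficient well past 10^6 rows.
        SELECT * FROM samples WHERE labgroup_id = 42 ORDER BY created_on DESC;
        SELECT * FROM samples WHERE user_id = 7      ORDER BY created_on DESC;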

    Read the article

  • Getting the most recent post based on date

    - by camcim
    Hi guys, how do I go about displaying the most recent post when I have two tables, both containing a column called created_on? This would be simple if all I had to do was get the most recent post based on the post's created_on value; however, if a post has replies I need to factor those into the equation. If a post has a more recent reply, I want to use the reply's created_on value but still get the post's post_id and subject.

    The posts table structure:

        CREATE TABLE `posts` (
          `post_id` bigint(20) unsigned NOT NULL auto_increment,
          `cat_id` bigint(20) NOT NULL,
          `user_id` bigint(20) NOT NULL,
          `subject` tinytext NOT NULL,
          `comments` text NOT NULL,
          `created_on` datetime NOT NULL,
          `status` varchar(10) NOT NULL default 'INACTIVE',
          `private_post` varchar(10) NOT NULL default 'PUBLIC',
          `db_location` varchar(10) NOT NULL,
          PRIMARY KEY (`post_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=7 ;

    The replies table structure:

        CREATE TABLE `replies` (
          `reply_id` bigint(20) unsigned NOT NULL auto_increment,
          `post_id` bigint(20) NOT NULL,
          `user_id` bigint(20) NOT NULL,
          `comments` text NOT NULL,
          `created_on` datetime NOT NULL,
          `notify` varchar(5) NOT NULL default 'YES',
          `status` varchar(10) NOT NULL default 'INACTIVE',
          `db_location` varchar(10) NOT NULL,
          PRIMARY KEY (`reply_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;

    Here is my query so far. I've removed my attempt at extracting the dates.

        $strQuery = "SELECT posts.post_id, posts.created_on, replies.created_on, posts.subject ";
        $strQuery = $strQuery."FROM posts ,replies ";
        $strQuery = $strQuery."WHERE posts.post_id = replies.post_id ";
        $strQuery = $strQuery."AND posts.cat_id = '".$row->cat_id."'";
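    A sketch of one query shape that does this (untested against the schema above): LEFT JOIN the replies so posts without replies still appear, and take the greater of the post's own created_on and its newest reply as the activity date:

        SELECT p.post_id,
               p.subject,
               GREATEST(p.created_on, COALESCE(MAX(r.created_on), p.created_on)) AS last_activity
        FROM posts p
        LEFT JOIN replies r ON r.post_id = p.post_id
        WHERE p.cat_id = 1                     -- placeholder for $row->cat_id
        GROUP BY p.post_id, p.subject, p.created_on
        ORDER BY last_activity DESC
        LIMIT 1;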

    Read the article

  • MySQL Need some help with a query

    - by Jules
    I'm trying to fix some data by adding a new field. I have a backup from a few months ago and I have restored this database to my server. I'm looking at a table called pads; its primary key is PadID and the field of importance is called RemoveMeDate. In my restored (older) database there are fewer records with an actual date set in RemoveMeDate. My control date is 2001-01-01 00:00:00, meaning that the record is not hidden, i.e. visible.

    What I need to do is select all the records from the older database/table with the control date and join with those from the newer db/table where the control date is not set. I hope I've explained that correctly. I'll try again, with numbers: I have 80,000 visible records in the older table (with the control date set) and 30,000 in the newer db/table. I need to select the 50,000 from the old database, to perform an update query.

    Here's my query, which I can't get to work as I'd like. jules-fix-reasons is the old database, jules is the newer one.

        select p.padid
        from `jules-fix-reasons`.`pads` p
        JOIN `jules`.`pads` ON p.padid = `jules`.`pads`.`PadID`
        where p.RemoveMeDate <> '2001-01-01 00:00:00'
          AND `jules`.`pads`.RemoveMeDate = '2001-01-01 00:00:00'

    Read the article

  • Mysql regexp performance question

    - by Tim
    Rumour has it that this:

        SELECT * FROM lineage_string
        WHERE lineage LIKE '%179%' AND lineage REGEXP '(^|/)179(/|$)'

    would be faster than this:

        SELECT * FROM lineage_string
        WHERE lineage REGEXP '(^|/)179(/|$)'

    Can anyone confirm? Or does anyone know a decent way to test the speed of such queries? Thanks
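    One way to measure it (assuming a MySQL 5.x server where the query cache can be bypassed): time the full statements with SQL_NO_CACHE so repeated runs are comparable, and use BENCHMARK() to compare the cost of the two predicates in isolation:

        -- Wall-clock comparison of the full queries, bypassing the query cache.
        SELECT SQL_NO_CACHE COUNT(*) FROM lineage_string
        WHERE lineage LIKE '%179%' AND lineage REGEXP '(^|/)179(/|$)';

        SELECT SQL_NO_CACHE COUNT(*) FROM lineage_string
        WHERE lineage REGEXP '(^|/)179(/|$)';

        -- Relative cost of the predicates themselves, on a sample value.
        SELECT BENCHMARK(1000000, 'a/179/b' LIKE '%179%');
        SELECT BENCHMARK(1000000, 'a/179/b' REGEXP '(^|/)179(/|$)');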

    Read the article

  • JavaScript tags, performance and W3C

    - by Thomas
    Today I was looking for website optimization content and I found an article talking about moving JavaScript scripts to the bottom of the HTML page. Is this valid under the W3C recommendations? I had learned that all JavaScript must go inside the head tag... Thank you.

    Read the article

  • real time stock quotes, StreamReader performance optimization

    - by sean717
    I am working on a program that extracts real-time quotes for 900+ stocks from a website. I use HttpWebRequest to send an HTTP request to the site, store the response in a stream and open it using the following code:

        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        Stream stream = response.GetResponseStream();
        StreamReader reader = new StreamReader(stream);

    The received HTML is large (5000+ lines), so it takes a long time to parse it and extract the price. For 900 files, parsing and extracting takes about 6 minutes, which my boss isn't happy with; he told me he wants the whole process done in TWO minutes.

    I've identified that the part of the program that takes most of the time is parsing and extracting. I've tried to optimize the code to make it faster, and the following is what I have now after some optimization:

        // skip lines at the top
        for (int i = 0; i < 1500; ++i)
            reader.ReadLine();
        // read the line that contains the price
        string theLine = reader.ReadLine();
        // ... extract the price from the line

    Now it takes about 4 minutes to process all the files, which is still a significant gap from what my boss is expecting. So I am wondering, is there another way that I can further speed up the parsing and extracting and have everything done within 2 minutes?
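    Since each page is fetched and parsed one after another, the time is probably dominated by the 900 sequential downloads rather than by StreamReader itself. A sketch of fetching in parallel, assuming .NET 4 or later is available (the URL list and degree of parallelism are placeholders):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Net;
        using System.Threading.Tasks;

        static class QuoteFetcher
        {
            public static void FetchAll(IEnumerable<string> quoteUrls)
            {
                // HTTP/1.1 clients default to 2 connections per host; raise it or the loop gains little.
                ServicePointManager.DefaultConnectionLimit = 20;

                Parallel.ForEach(quoteUrls, new ParallelOptions { MaxDegreeOfParallelism = 20 }, url =>
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    using (var response = (HttpWebResponse)request.GetResponse())
                    using (var reader = new StreamReader(response.GetResponseStream()))
                    {
                        // Same per-page logic as before: skip the top, read the line with the price.
                        for (int i = 0; i < 1500; i++) reader.ReadLine();
                        string theLine = reader.ReadLine();
                        // ... extract the price from theLine ...
                    }
                });
            }
        }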

    Read the article

  • list or container O(1)-ish insertion/deletion performance, with array semantics

    - by Chris Kaminski
    I'm looking for a collection that offers list semantics, but also allows array semantics. Say I have a list with the following items:

        apple
        orange
        carrot
        pear

    then my container array would be:

        container[0] == apple
        container[1] == orange
        container[2] == carrot
        container[3] == pear

    Then say I delete the orange element:

        container[0] == apple
        container[1] == carrot
        container[2] == pear

    I don't particularly care if sort order is maintained; I'd just like the array values to function as accelerators to the list items, and I want to collapse gaps in the array without having to do an explicit resizing.

    Read the article

  • Combinationally unique MySQL tables

    - by Jack Webb-Heller
    So, here's the problem (it's probably an easy one :P). This is my table structure:

        CREATE TABLE `users_awards` (
          `user_id` int(11) NOT NULL,
          `award_id` int(11) NOT NULL,
          `duplicate` int(11) NOT NULL DEFAULT '0',
          UNIQUE KEY `award_id` (`award_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    So it's for a user awards system. I don't want my users to be granted the same award multiple times, which is why I have a 'duplicate' field. The query I'm trying is this (with sample data of 3 and 2):

        INSERT INTO users_awards (user_id, award_id) VALUES ('3','2')
        ON DUPLICATE KEY UPDATE duplicate=duplicate+1

    So my MySQL is a little rusty, but I set user_id to be a primary key and award_id to be a UNIQUE key. This (kind of) created the desired effect. When user 1 was given award 2, it was inserted. If he/she got it twice, only one row would be in the table, and duplicate would be set to 1. And again, 2, etc. When user 2 was given award 1, it was inserted. If he/she got it twice, duplicate was updated, etc.

    But when user 1 is given award 1 (after user 2 has already been awarded it), user 2 (with award 1)'s duplicate field increases and nothing is added for user 1. Sorry if that's a little n00bish. Really appreciate the help! Jack
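    For reference, a sketch of the same table with the uniqueness defined over the (user_id, award_id) pair instead of award_id alone, so ON DUPLICATE KEY UPDATE only fires when that particular user already holds that particular award (untested against the existing data):

        CREATE TABLE `users_awards` (
          `user_id` int(11) NOT NULL,
          `award_id` int(11) NOT NULL,
          `duplicate` int(11) NOT NULL DEFAULT '0',
          UNIQUE KEY `user_award` (`user_id`, `award_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

        INSERT INTO users_awards (user_id, award_id) VALUES (3, 2)
          ON DUPLICATE KEY UPDATE duplicate = duplicate + 1;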

    Read the article

  • Linq generic Expression in query on "element" or on IQueryable (multiple use)

    - by Bogdan Maxim
    Hi, I have the following expression:

        public static Expression<Func<T, bool>> JoinByDateCheck<T>(T entity, DateTime dateToCheck)
            where T : IDateInterval
        {
            return (entityToJoin) => entityToJoin.FromDate.Date <= dateToCheck.Date
                && (entityToJoin.ToDate == null || entityToJoin.ToDate.Value.Date >= dateToCheck.Date);
        }

    The IDateInterval interface is defined like this:

        interface IDateInterval
        {
            DateTime FromDate { get; }
            DateTime? ToDate { get; }
        }

    and I need to apply it in a few ways:

    (1) Query on a Linq2Sql table:

        var q1 = from e in intervalTable
                 where FunctionThatCallsJoinByDateCheck(e, constantDateTime)
                 select e;

    or something like this:

        intervalTable.Where(FunctionThatCallsJoinByDateCheck(e, constantDateTime))

    (2) I need to use it in some table joins (as Linq2Sql doesn't provide comparative joins):

        var q2 = from e1 in t1
                 join e2 in t2 on e1.FK == e2.PK
                 where OtherFunctionThatCallsJoinByDateCheck(e2, e1.FromDate)

    or

        var q2 = from e1 in t1
                 from e2 in t2
                 where e1.FK == e2.PK && OtherFunctionThatCallsJoinByDateCheck(e2, e1.FromDate)

    (3) I need to use it in some queries like this:

        var q3 = from e in intervalTable.FilterFunctionThatCallsJoinByDateCheck(constantDate);

    Dynamic linq is not something that I can use, so I have to stick to plain linq. Thank you.

    Clarification: Initially I had just the last method (FilterFunctionThatCallsJoinByDateCheck(this IQueryable<IDateInterval> entities, DateTime dateConstant)), which contained the code from the expression. The problem is that I get a SQL Translate exception if I write the code in a method and call it like that. All I want is to extend the use of this function to the where clause (see the second query in point 2).
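    A sketch of one way to cover cases (1) and (3): build the expression from just the date (the unused entity parameter is dropped, which is an assumption on my part) and pass the whole Expression<Func<T, bool>> to Where, so the provider receives an expression tree instead of an opaque method call. Whether LINQ to SQL will translate the IDateInterval members on every mapped class still needs to be verified:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class DateIntervalFilters
        {
            // Same predicate as JoinByDateCheck, built from the date alone.
            public static Expression<Func<T, bool>> CoversDate<T>(DateTime dateToCheck)
                where T : IDateInterval
            {
                return e => e.FromDate.Date <= dateToCheck.Date
                         && (e.ToDate == null || e.ToDate.Value.Date >= dateToCheck.Date);
            }

            // Case (3): var q3 = intervalTable.FilterByDate(constantDate);
            public static IQueryable<T> FilterByDate<T>(this IQueryable<T> source, DateTime dateToCheck)
                where T : IDateInterval
            {
                return source.Where(CoversDate<T>(dateToCheck));
            }
        }

        // Case (1), where IntervalEntity is a placeholder for the mapped entity type:
        // var q1 = intervalTable.Where(DateIntervalFilters.CoversDate<IntervalEntity>(constantDateTime));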

    Read the article

  • Exchange-Server Query

    - by Rudi Kershaw
    First, a little background. I've recently been taken on as a web and software developer for a small company which has no other in-house IT support. They've been asking my opinion on lots of IT subjects that are quite far out of my comfort zone; I'm definitely not a network admin.

    Their IT consultancy contractor is pushing them to upgrade their dedicated Exchange server, even though the one they currently have seems to have a lot of life left in it and is running problem-free. They say it's "coming to the natural end of its life". They want to install a monster with a Xeon E5-2420, 32GB RAM, 2x 1TB HDDs, Windows Server 2012 and Microsoft Exchange 2010, and they want to charge a small fortune for it. Basically, this system seems massively over the top, seeing as it won't be doing anything other than running as an Exchange server for a company with fewer than 25 email accounts.

    My employers also have an in-house file server that hosts three web apps, an SQL server, their local domain, a print server and shared folders. That machine uses the same specs as the proposed new one, and it is barely using any of its potential. I asked if Microsoft Exchange 2010 could be installed on their file server, but they said that MS Exchange can't run on the same system as an SQL server because for some reason they will eat up each other's resources (even though the SQL server isn't touching 1% of the current system's CPU or RAM).

    My question is really: are they trying to rip my employers off? Could MS Exchange be installed on their other server (on a virtual instance or not), or does the old one even need replacing at all? Going with their current suggestion will cost the company in excess of £6k, and it seems entirely unnecessary.

    I apologise, because I know this is probably a little thin on details, but if I carry on I could end up writing a massive essay that no-one will want to read. I've been doing my research, but I'm not knowledgeable enough to make any hard decisions. Let me know if you need any more details. Thank you for any help you can offer.

    Further details: the new Exchange would need to support Outlook Web App, 25 users, a few public mailboxes, and email exchange with BlackBerries.

    Read the article
