Search Results

Search found 90546 results on 3622 pages for 'code optimization'.

  • One letter game problem?

    - by Alex K
    Recently at a job interview I was given the following problem: write a script capable of running on the command line as python. It should take in two words on the command line (or, optionally, it can query the user to supply the two words via the console). Given those two words:

    a. Ensure they are of equal length.
    b. Ensure they are both words present in the dictionary of valid words in the English language that you downloaded.

    If so, compute whether you can reach the second word from the first by a series of steps as follows:

    a. You can change one letter at a time.
    b. Each time you change a letter the resulting word must also exist in the dictionary.
    c. You cannot add or remove letters.

    If the two words are reachable, the script should print out the path which leads as a single, shortest path from one word to the other. You can use /usr/share/dict/words for your dictionary of words. My solution consisted of using breadth first search to find a shortest path between two words. But apparently that wasn't good enough to get the job :( Would you guys know what I could have done wrong? Thank you so much.

        import collections
        import functools
        import re

        def time_func(func):
            import time
            def wrapper(*args, **kwargs):
                start = time.time()
                res = func(*args, **kwargs)
                timed = time.time() - start
                setattr(wrapper, 'time_taken', timed)
                return res
            functools.update_wrapper(wrapper, func)
            return wrapper

        class OneLetterGame:
            def __init__(self, dict_path):
                self.dict_path = dict_path
                self.words = set()

            def run(self, start_word, end_word):
                '''Runs the one letter game with the given start and end words.'''
                assert len(start_word) == len(end_word), \
                    'Start word and end word must be of the same length.'
                self.read_dict(len(start_word))
                path = self.shortest_path(start_word, end_word)
                if not path:
                    print 'There is no path between %s and %s (took %.2f sec.)' % (
                        start_word, end_word, self.shortest_path.time_taken)
                else:
                    print 'The shortest path (found in %.2f sec.) is:\n=> %s' % (
                        self.shortest_path.time_taken, ' -- '.join(path))

            def _bfs(self, start):
                '''Implementation of breadth first search as a generator.
                The portion of the graph to explore is given on demand using
                get_neighbours. Care was taken so that a vertex / node is
                explored only once.'''
                queue = collections.deque([(None, start)])
                inqueue = set([start])
                while queue:
                    parent, node = queue.popleft()
                    yield parent, node
                    new = set(self.get_neighbours(node)) - inqueue
                    inqueue = inqueue | new
                    queue.extend([(node, child) for child in new])

            @time_func
            def shortest_path(self, start, end):
                '''Returns the shortest path from start to end using bfs.'''
                assert start in self.words, 'Start word not in dictionary.'
                assert end in self.words, 'End word not in dictionary.'
                paths = {None: []}
                for parent, child in self._bfs(start):
                    paths[child] = paths[parent] + [child]
                    if child == end:
                        return paths[child]
                return None

            def get_neighbours(self, word):
                '''Gets every word one letter away from a given word. We do
                not keep these words in memory because bfs accesses a given
                vertex only once.'''
                neighbours = []
                p_word = ['^' + word[0:i] + '\w' + word[i+1:] + '$'
                          for i, w in enumerate(word)]
                p_word = '|'.join(p_word)
                for w in self.words:
                    if w != word and re.match(p_word, w, re.I | re.U):
                        neighbours += [w]
                return neighbours

            def read_dict(self, size):
                '''Loads every word of a specific size from the dictionary
                into memory.'''
                for l in open(self.dict_path):
                    l = l.decode('latin-1').strip().lower()
                    if len(l) == size:
                        self.words.add(l)

        if __name__ == '__main__':
            import sys
            if len(sys.argv) not in [3, 4]:
                print 'Usage: python one_letter_game.py start_word end_word'
            else:
                g = OneLetterGame(dict_path='/usr/share/dict/words')
                try:
                    g.run(*sys.argv[1:])
                except AssertionError, e:
                    print e
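
    A likely performance sore spot in the code above is get_neighbours, which regex-scans the whole dictionary once per expanded word. A common alternative is to bucket the words by wildcard pattern once at load time, so each neighbour lookup becomes a few dictionary hits. A minimal sketch of that idea (the helper names are illustrative, not part of the original script):

        import collections

        def build_buckets(words):
            # Map each wildcard pattern ('c_t' for 'cat') to the set of
            # words matching it; built once per dictionary load.
            buckets = collections.defaultdict(set)
            for word in words:
                for i in range(len(word)):
                    buckets[word[:i] + '_' + word[i+1:]].add(word)
            return buckets

        def get_neighbours(word, buckets):
            # Union of all words sharing a wildcard pattern with `word`,
            # minus the word itself -- no regex, no full dictionary scan.
            result = set()
            for i in range(len(word)):
                result |= buckets[word[:i] + '_' + word[i+1:]]
            result.discard(word)
            return result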

  • Profiling short-lived Java applications

    - by ejel
    Is there any Java profiler that allows profiling short-lived applications? The profilers I have found so far seem to work only with applications that keep running until user termination. However, I want to profile applications that work like command-line utilities: they run and exit immediately. Tools like VisualVM or the NetBeans Profiler do not even recognize that the application was run. I am looking for something similar to Python's cProfile, in that the profiler result is returned when the application exits.
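
    One candidate worth checking: the JDK's bundled hprof agent behaves much like cProfile in that it writes its report (java.hprof.txt by default) when the JVM exits. A hedged example invocation, where the class name is a placeholder:

        java -agentlib:hprof=cpu=samples,interval=10,depth=10 MyCommandLineTool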

  • The best way to implement drawing features like Keynote

    - by Shamseddine
    Hi all, I'm trying to make a little iPad tool for drawing simple geometrical objects (rect, rounded rect, ellipse, star, ...). My goal is to make something very close to Keynote's drawing feature, i.e. let the user add a rect (for instance), resize it and move it. I also want the user to be able to select many objects and move them together. I've thought about at least 3 different ways to do that:

    1. Extend UIView for each object type (a class for Rect, another for Ellipse, ...) with a custom drawing method, then add each view as a subview of the global view.
    2. Extend CALayer for each object type (a class for Rect, another for Ellipse, ...) with a custom drawing method, then add each layer as a sublayer of the global view's layer.
    3. Extend NSObject for each object type (a class for Rect, another for Ellipse, ...) with just a drawing method which takes a CGContext and a rect as arguments and draws the form directly into it. These methods would be called by the drawing method of the global view.

    I'm aware that the first two ways come with functions to detect touches on each object, to add shadows easily, and so on, but I'm afraid they are a little too heavy. That's why I thought about the last way, which seems more straightforward. Which way would be the most efficient? Or is there another way I haven't thought of? Any help will be appreciated ;-) Thanks.

  • Good Starting Points for Optimizing Database Calls in Ruby on Rails?

    - by viatropos
    I have a menu in Rails which grabs a nested tree of Post models, each of which has a Slug model associated via a polymorphic association (using the friendly_id gem for slugs and awesome_nested_set for the tree). The database output in development looks like this (here's the full gist):

        SQL (0.4ms)   SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 39)
        CACHE (0.0ms) SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 40 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        SQL (0.3ms)   SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 40)
        CACHE (0.0ms) SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 41 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        ...
        Rendered shared/_menu.html.haml (907.6ms)

    What are some quick things I should always do to optimize this from the start (easy things)? Some things I'm thinking about now: Can Rails 3 eager load the whole Post tree plus the associated Slugs in one DB call? Can I do that easily with named scopes or custom SQL? What is best practice in this situation? I'm not really thinking about memcached here, as that can be applied to much more than just this.
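
    For reference, the repeating Slug Load pattern above is the classic N+1 shape, and Rails 3 can usually collapse it to one extra query with eager loading (something like Post.includes(:slugs); the exact association name depends on how friendly_id sets it up). The single query it would issue looks roughly like this sketch, with the id list taken from the log:

        SELECT "slugs".*
        FROM "slugs"
        WHERE "slugs".sluggable_type = 'Post'
          AND "slugs".sluggable_id IN (13, 40, 41)
        ORDER BY id DESC;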

  • Combining JavaScript files into a single file

    - by toomanyairmiles
    Having read up recently on Yahoo's web optimisation tips and using YSlow, I've implemented a few of their ideas on one of my sites, http://www.gwynfryncottages.com - you can see the combined file here: http://www.gwynfryncottages.com/js/gw-custom.js. While this technique seems to work perfectly on most occasions, and really does speed up the site, I do notice a significantly higher number of errors where the javascripts don't load or don't load completely while I'm working on the site. So, three questions:

    1. Is combining scripts this way a good idea at all in terms of reliability?
    2. Is there any way to measure the number of errors?
    3. Is there any way to 'pre-load' the javascript or ensure that the number of loading errors is reduced?
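
    On question 2, a minimal sketch of one way to count errors: a global window.onerror hook that beacons failures back to the server (the /log-error endpoint is a placeholder you would need to implement):

        window.onerror = function (message, url, line) {
            // Fire-and-forget image beacon so script errors get counted server-side.
            var img = new Image();
            img.src = '/log-error?m=' + encodeURIComponent(message) +
                      '&u=' + encodeURIComponent(url) + '&l=' + line;
            return false; // let the browser log the error as usual
        };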

  • Can this MySQL subquery be optimised?

    - by Dan
    I have two tables, news and news_views. Every time an article is viewed, the news id, IP address and date are recorded in news_views. I'm using a query with a subquery to fetch the most viewed titles from news, by getting the total count of views in the last 24 hours for each one. It works fine, except that it takes between 5 and 10 seconds to run, presumably because there are hundreds of thousands of rows in news_views and it has to go through the entire table before it can finish. The query is as follows; is there any way at all it can be improved?

        SELECT n.title, nv.views
        FROM news n
        LEFT JOIN (
            SELECT news_id, count( DISTINCT ip ) AS views
            FROM news_views
            WHERE datetime >= SUBDATE(now(), INTERVAL 24 HOUR)
            GROUP BY news_id
        ) AS nv ON nv.news_id = n.id
        ORDER BY views DESC
        LIMIT 15
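
    Assuming no similar index exists yet, one common first step is a composite index that covers the subquery, so the date filter, grouping, and DISTINCT count can all be satisfied from the index instead of a full table scan. A hedged sketch (the index name is arbitrary, and the column order is worth benchmarking against (news_id, datetime, ip)):

        ALTER TABLE news_views
            ADD INDEX idx_views_recent (datetime, news_id, ip);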

  • Database indexes - what should they be

    - by WebweaverD
    Most of my database tables have a clear unique index through which lookups are done 90% of the time, but I am a bit unsure about this one. I have a table which keeps track of user rating totals for items in my database. I now want to add another table to track individual ratings, with an IP address column to make sure no one can rate something twice. Since I can see this becoming a big, high-use table, it is important to optimize it correctly. (MySQL table.) This table will have the following fields:

        rating_id (always - unique), item_id (always - not unique), user_id (optional - not unique),
        ip_address (always - not unique), rating_value (always - not unique), has_review (bool)

    Now I envision 90% of the queries going something like this: When a user rates something - select where item_id = x and ip_address = y, and (if rows = 0) insert the rating. When in user account pages - select where ip_address = x or username = y. Now, none of the fields searched on are unique. Can I still use them as indexes (for example item_id and ip_address), can I have two indexes, and will this still improve performance over a non-indexed table?
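
    Indexes don't have to be unique, and a composite index matching the first query's WHERE clause is the usual answer; since the second query uses OR across two columns, separate single-column indexes let MySQL consider an index_merge union. A hedged sketch, assuming the table is called ratings:

        ALTER TABLE ratings
            ADD INDEX idx_item_ip (item_id, ip_address),
            ADD INDEX idx_ip (ip_address),
            ADD INDEX idx_user (user_id);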

  • Fastest way to put the contents of a Set<String> into a single String with words separated by whitespace?

    - by Lars Andren
    I have a few Set<String>s and want to transform each of these into a single String where each element of the original Set is separated by a whitespace " ". A naive first approach is doing it like this:

        Set<String> set_1;
        Set<String> set_2;

        StringBuilder builder = new StringBuilder();
        for (String str : set_1) {
            builder.append(str).append(" ");
        }
        this.string_1 = builder.toString();

        builder = new StringBuilder();
        for (String str : set_2) {
            builder.append(str).append(" ");
        }
        this.string_2 = builder.toString();

    Can anyone think of a faster, prettier or more efficient way to do this?
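
    For comparison, a small helper along these lines removes the duplication and the trailing space; the class and method names are illustrative. A runnable sketch:

        import java.util.Set;
        import java.util.TreeSet;

        public class JoinExample {
            // Joins the set's elements with a separator, with no trailing separator.
            static String join(Set<String> set, String sep) {
                StringBuilder builder = new StringBuilder();
                for (String s : set) {
                    if (builder.length() > 0) {
                        builder.append(sep);
                    }
                    builder.append(s);
                }
                return builder.toString();
            }

            public static void main(String[] args) {
                Set<String> set = new TreeSet<String>();
                set.add("alpha");
                set.add("beta");
                System.out.println(join(set, " ")); // prints "alpha beta"
            }
        }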

  • Float compile-time calculation not happening?

    - by Klaim
    A little test program:

        #include <iostream>

        const float TEST_FLOAT = 1/60;
        const float TEST_A = 1;
        const float TEST_B = 60;
        const float TEST_C = TEST_A / TEST_B;

        int main()
        {
            std::cout << TEST_FLOAT << std::endl;
            std::cout << TEST_C << std::endl;
            std::cin.ignore();
            return 0;
        }

    Result:

        0
        0.0166667

    Tested on Visual Studio 2008 & 2010. I have worked with other compilers that, if I remember well, made the first result match the second. Now my memory could be wrong, but shouldn't TEST_FLOAT have the same value as TEST_C? If not, why? Is TEST_C's value resolved at compile time or at runtime? I always assumed the former, but now that I see these results I have some doubts...
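
    For what it's worth, the standard explanation: in 1/60 both operands are ints, so it is integer division yielding 0 before the conversion to float ever happens, while TEST_A / TEST_B divides two floats. Both are constant expressions a compiler will normally fold at compile time; the folding just happens on different types. A minimal fix sketch:

        const float TEST_FLOAT = 1.0f / 60;  // float division: 0.0166667
        // const float TEST_FLOAT = 1/60;    // int division: 0, then converted to float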

  • SEO for Ultraseek 5.7

    - by Adam N
    We've got Ultraseek 5.7 indexing the content on our corporate intranet site, and we'd like to make sure our web pages are being optimized for it. Which SEO techniques are useful for Ultraseek, and where can I find documentation about these features? Features I've considered implementing:

    - Make the title and first H1 contain the most valuable information about the page
    - Implement a sitemap.xml file
    - Ping the Ultraseek xpa interface when new content is added
    - Use "SEO-friendly" URL strings
    - Add meta keywords to the HTML pages

  • JavaScript + Firebug: "cannot access optimized closure" - what does it mean?

    - by interstar
    I just got the following error in a piece of JavaScript (in Firefox 3.5, with Firebug running):

        cannot access optimized closure

    I know, superficially, what caused the error. I had a line options.length() instead of options.length. Fixing this bug made the message go away. But I'm curious. What does this mean? What is an optimized closure? Is optimizing a closure something that the JavaScript interpreter does automatically? What does it do?

  • Combine query results from one table with the defaults from another

    - by pulegium
    This is a dumbed-down version of the real table data, so it may look a bit silly.

        Table 1 (users):
            id INT
            username TEXT
            favourite_food TEXT
            food_pref_id INT

        Table 2 (food_preferences):
            id INT
            food_type TEXT

    The logic is as follows: let's say I have this in my food preference table:

        1, 'VEGETARIAN'

    and this in the users table:

        1, 'John', NULL, 1
        2, 'Pete', 'Curry', 1

    In which case John defaults to being a vegetarian, but Pete should show up as a person who enjoys curry. Question: is there any way to combine the query into one select statement, so that it would get the default from the preferences table if the favourite_food column is NULL? I can obviously do this in application logic, but it would be nice to offload this to SQL, if possible. The DB is SQLite3...
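
    Assuming the schema above, COALESCE (which returns its first non-NULL argument) paired with a join is one way to express the fallback in a single statement; a sketch:

        SELECT u.username,
               COALESCE(u.favourite_food, p.food_type) AS food
        FROM users u
        LEFT JOIN food_preferences p ON p.id = u.food_pref_id;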

  • Why does adding Crossover to my Genetic Algorithm give me worse results?

    - by MahlerFive
    I have implemented a Genetic Algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add in crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX Crossover methods, and both suffer from bad results. Here are the other parameters I'm using:

    - Mutation: Single Swap Mutation or Inverted Subsequence Mutation (as described by Tiendil here), with mutation rates tested between 1% and 25%
    - Selection: Roulette Wheel Selection
    - Fitness function: 1 / distance of tour
    - Population size: tested 100, 200, 500; I also run the GA 5 times so that I have a variety of starting populations
    - Stop condition: 2500 generations

    With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover, my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple hill-climbing algorithm to solve the problem, and when I run that 1000 times (faster than running the GA 5 times) I get results around 410-450 distance, and I would expect to get better results using a GA. Any ideas as to why my GA performs worse when I add crossover? And why is it performing much worse than a simple hill-climbing algorithm, which should get stuck on local maxima as it has no way of exploring once it finds a local max?
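
    One commonly suggested experiment with a setup like this: roulette selection plus disruptive crossover can lose the best tour between generations, so adding elitism (copying the top few individuals through unchanged) helps separate "crossover is destructive" from "the GA keeps forgetting its best solutions". A hedged Python sketch, where breed is a placeholder for your selection-plus-crossover-plus-mutation step:

        # Hypothetical elitism wrapper: keep the k best tours unchanged each
        # generation so crossover/mutation cannot destroy the best-so-far.
        def next_generation(population, fitness, breed, k=2):
            ranked = sorted(population, key=fitness, reverse=True)
            elites = ranked[:k]
            children = [breed(population) for _ in range(len(population) - k)]
            return elites + children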

  • Optimizing sorting a container of objects with heap-allocated buffers - how to avoid hard-copying buffers?

    - by Kache4
    I was making sure I knew how to do the op= and copy constructor correctly in order to sort() properly, so I wrote up a test case. After getting it to work, I realized that the op= was hard-copying all the data_. I figure if I wanted to sort a container with this structure (its elements have heap-allocated char buffer arrays), it'd be faster to just swap the pointers around. Is there a way to do that? Would I have to write my own sort/swap function?

        #include <deque>
        //#include <string>
        //#include <utility>
        //#include <cstdlib>
        #include <cstring>
        #include <iostream>
        //#include <algorithm> // I use sort(), so why does this still compile when commented out?

        #include <boost/filesystem.hpp>
        #include <boost/foreach.hpp>

        using namespace std;
        namespace fs = boost::filesystem;

        class Page {
        public:
            // constructor
            Page(const char* path, const char* data, int size)
                : path_(fs::path(path)), size_(size), data_(new char[size]) {
                // cout << "Creating Page..." << endl;
                strncpy(data_, data, size);
                // cout << "done creating Page..." << endl;
            }

            // copy constructor
            Page(const Page& other)
                : path_(fs::path(other.path())), size_(other.size()),
                  data_(new char[other.size()]) {
                // cout << "Copying Page..." << endl;
                strncpy(data_, other.data(), size_);
                // cout << "done copying Page..." << endl;
            }

            // destructor
            ~Page() { delete[] data_; }

            // accessors
            const fs::path& path() const { return path_; }
            const char* data() const { return data_; }
            int size() const { return size_; }

            // operators
            Page& operator = (const Page& other) {
                if (this == &other) return *this;
                char* newImage = new char[other.size()];
                strncpy(newImage, other.data(), other.size());
                delete[] data_;
                data_ = newImage;
                path_ = fs::path(other.path());
                size_ = other.size();
                return *this;
            }

            bool operator < (const Page& other) const { return path_ < other.path(); }

        private:
            fs::path path_;
            int size_;
            char* data_;
        };

        class Book {
        public:
            Book(const char* path) : path_(fs::path(path)) {
                cout << "Creating Book..." << endl;
                cout << "pushing back #1" << endl;
                pages_.push_back(Page("image1.jpg", "firstImageData", 14));
                cout << "pushing back #3" << endl;
                pages_.push_back(Page("image3.jpg", "thirdImageData", 14));
                cout << "pushing back #2" << endl;
                pages_.push_back(Page("image2.jpg", "secondImageData", 15));

                cout << "testing operator <" << endl;
                cout << pages_[0].path().string() << (pages_[0] < pages_[1] ? " < " : " > ") << pages_[1].path().string() << endl;
                cout << pages_[1].path().string() << (pages_[1] < pages_[2] ? " < " : " > ") << pages_[2].path().string() << endl;
                cout << pages_[0].path().string() << (pages_[0] < pages_[2] ? " < " : " > ") << pages_[2].path().string() << endl;

                cout << "sorting" << endl;
                BOOST_FOREACH (Page p, pages_)
                    cout << p.path().string() << endl;
                sort(pages_.begin(), pages_.end());
                cout << "done sorting\n";
                BOOST_FOREACH (Page p, pages_)
                    cout << p.path().string() << endl;

                cout << "checking datas" << endl;
                BOOST_FOREACH (Page p, pages_) {
                    char data[p.size() + 1];
                    strncpy((char*)&data, p.data(), p.size());
                    data[p.size()] = '\0';
                    cout << p.path().string() << " " << data << endl;
                }
                cout << "done Creating Book" << endl;
            }

        private:
            deque<Page> pages_;
            fs::path path_;
        };

        int main() {
            Book* book = new Book("/some/path/");
        }
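
    On the pointer-swapping idea: one way, sketched against the class above, is to give Page a cheap member swap and express assignment through it (copy-and-swap). Pre-C++11, std::sort still copy-constructs temporaries, so this does not eliminate every buffer copy, but assignment stops double-copying and generic code can find the cheap swap via the free overload. A hedged sketch; these members would replace the operator= above:

        // Inside class Page (std::swap needs <algorithm> in C++98):
        void swap(Page& other) {
            // Exchange pointers and bookkeeping only; buffers are never copied.
            std::swap(data_, other.data_);
            std::swap(size_, other.size_);
            std::swap(path_, other.path_);
        }

        Page& operator = (Page other) {  // note: takes its argument by value
            swap(other);                 // copy-and-swap: exception-safe
            return *this;
        }

        // At namespace scope, so ADL picks up the cheap version:
        void swap(Page& a, Page& b) { a.swap(b); }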

  • Question about Cost in Oracle Explain Plan

    - by Will
    When Oracle is estimating the 'Cost' for certain queries, does it actually look at the amount of data (rows) in a table? For example, if I'm doing a full table scan of employees for name='Bob', does it estimate the cost by counting the number of existing rows, or is it always a set cost?
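
    For what it's worth, Oracle's cost-based optimizer works from statistics gathered on the table (row counts, block counts, column histograms, typically collected via DBMS_STATS) rather than counting rows at parse time, so the estimate tracks the data only as closely as the statistics are fresh. One way to inspect the estimates yourself, as a sketch:

        EXPLAIN PLAN FOR
          SELECT * FROM employees WHERE name = 'Bob';

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);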

  • Why is index_merge not used here by MySQL?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned, b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?

    Update:

        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL | 1863 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Version:

        mysql> select version();
        +----------------------+
        | version()            |
        +----------------------+
        | 5.1.36-community-log |
        +----------------------+
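
    Two things worth noting here. With only three rows, a full scan genuinely is the cheapest plan, so skipping index_merge is expected; with 1863 rows it is less clear-cut, and the optimizer's cost estimate for merging two index scans may still lose to a single pass. A common rewrite to benchmark against, sketched below, lets each branch of the OR use its own index:

        SELECT * FROM t WHERE a = 1
        UNION
        SELECT * FROM t WHERE b = 4;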

  • Nginx, Apache, MySQL, Memcached on a server with 4G RAM - how to optimize so there is enough memory?

    - by TomSawyer
    I have a dedicated server with Nginx as a proxy for Apache, plus Memcached and MySQL, and 4G of RAM. These days the visitors to my site have not increased, but the server always gets overloaded during certain hours (9AM - 3PM). The RAM in use climbs second by second until it is full, and at that moment the server becomes overloaded; I have to kill all the Apache and MySQL services and reboot to get free memory, and then it fills up again - a terrible circle. Here is the RAM in use at the moment: 160M (nginx), 220M (apache), 512M (memcache), 924M (mysql). Process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql). And here is my my.cnf config - can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
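
    One arithmetic check worth doing against this config: read_buffer_size, join_buffer_size, sort_buffer_size and read_rnd_buffer_size are allocated per connection, so at 4M each with max_connections=200 the worst case for just those four buffers is about 200 x 16M = 3.2G, on top of memcached's 512M - which matches the symptom of memory climbing until exhaustion. A hedged sketch of more conservative values (illustrative starting points, not tuned recommendations):

        [mysqld]
        max_connections      = 100
        read_buffer_size     = 256K
        join_buffer_size     = 256K
        sort_buffer_size     = 1M
        read_rnd_buffer_size = 512K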

  • How can I track the last location of a shipment efficiently using the latest date of reporting?

    - by hash
    I need to find the latest location of each cargo item in a consignment. We mostly do this by looking at the route selected for a consignment and then finding the latest (max) time entered against the nodes of this route. For example, if a route has 5 nodes and we have entered timings against the first 3 nodes, then the latest timing (max time) tells us its location among the 3 nodes. I am really stuck on this query because of performance issues: even on a few hundred rows it takes more than 2 minutes. Please suggest how I can improve this query, or an alternative approach I should take. Note: ATA = Actual Time of Arrival and ATD = Actual Time of Departure.

        SELECT DISTINCT(c.id) as cid, c.ref as cons_ref, c.Name, c.CustRef
        FROM consignments c
        INNER JOIN routes r ON c.Route = r.ID
        INNER JOIN routes_nodes rn ON rn.Route = r.ID
        INNER JOIN cargo_timing ct ON c.ID = ct.ConsignmentID
        INNER JOIN (SELECT t.ConsignmentID, Max(t.firstata) as MaxDate
                    FROM cargo_timing t
                    GROUP BY t.ConsignmentID) as TMax
            ON TMax.MaxDate = ct.firstata AND TMax.ConsignmentID = c.ID
        INNER JOIN nodes an ON ct.routenodeid = an.ID
        INNER JOIN contract cor ON cor.ID = c.Contract
        WHERE c.Type = 'Road'
          AND (c.ATD = 0 AND c.ATA != 0)
          AND (cor.contract_reference in ('Generic','BP001','020-543-912'))
        ORDER BY c.ref ASC
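
    Assuming cargo_timing has no index covering the grouped MAX, that derived table is the usual first suspect: it groups the whole table by ConsignmentID and takes MAX(firstata), and the outer query then joins on both of those columns. A composite index lets both the aggregation and the join be served from the index; a hedged sketch (index name arbitrary):

        CREATE INDEX idx_cargo_timing_consignment
            ON cargo_timing (ConsignmentID, firstata);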

  • Speeding up inner joins between a large table and a small table

    - by Zaid
    This may be a silly question, but it may shed some light on how joins work internally. Let's say I have a large table L and a small table S (100K rows vs. 100 rows). Would there be any difference in terms of speed between the following two options?

        OPTION 1:                  OPTION 2:
        ---------                  ---------
        SELECT *                   SELECT *
        FROM L INNER JOIN S        FROM S INNER JOIN L
        ON L.id = S.id;            ON L.id = S.id;

    Notice that the only difference is the order in which the tables are joined. I realize performance may vary between different SQL implementations. If so, how would MySQL compare to Access?
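
    In MySQL, at least, the written order of inner joins is only a starting point: the cost-based optimizer chooses the join order from its estimates (typically driving from the smaller row set), so both statements normally compile to the same plan. A quick way to check, as a sketch:

        EXPLAIN SELECT * FROM L INNER JOIN S ON L.id = S.id;
        EXPLAIN SELECT * FROM S INNER JOIN L ON L.id = S.id;
        -- If the `table` column lists the same order in both outputs,
        -- the optimizer reordered one of the queries for you.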

  • Single large vs. multiple small MySQL tables for storing options

    - by Prasad
    Hi there, I'm aware of several questions on this forum relating to this, but I'm not talking about splitting tables for the same entity (like user, for example). Suppose I have a huge options table that stores list options like Gender, Marital Status, and many more domain-specific groups with the same structure, which I plan to capture in an OPTIONS table. Another simple option is to have the field set as ENUM, but there are disadvantages to that as well: http://www.brandonsavage.net/why-you-should-replace-enum-with-something-else/

        OPTIONS table:
            option_id  <will be referred to instead of the name>
            name
            value
            group

        Query: select .. from options where group = '15'

    - Since this table is expected to be multi-tenant, the number of rows could grow drastically.
    - I believe splitting the tables instead of finding by the group would be easier to write and faster to execute.
    - Or perhaps partitioning by the group or tenant?

    Please suggest. Thanks
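
    Before splitting, it may be worth confirming that the simple path is exhausted: with an index on the group column, the "where group = 15" lookup touches only that group's rows no matter how large the table grows. A hedged sketch (backticks because group is a reserved word; the tenant column is hypothetical):

        CREATE INDEX idx_options_group ON options (`group`);
        -- If rows are tenant-scoped, a composite index may serve better:
        -- CREATE INDEX idx_options_tenant_group ON options (tenant_id, `group`);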

  • Better way to summarize data about stop times?

    - by Vimvq1987
    This question is close to this one: http://stackoverflow.com/questions/2947963/find-the-period-of-over-speed. Here's my table:

        Longtitude  Latitude  Velocity  Time
        102         401       40        2010-06-01 10:22:34.000
        103         403       50        2010-06-01 10:40:00.000
        104         405       0         2010-06-01 11:00:03.000
        104         405       0         2010-06-01 11:10:05.000
        105         406       35        2010-06-01 11:15:30.000
        106         403       60        2010-06-01 11:20:00.000
        108         404       70        2010-06-01 11:30:05.000
        109         405       0         2010-06-01 11:35:00.000
        109         405       0         2010-06-01 11:40:00.000
        105         407       40        2010-06-01 11:50:00.000
        104         406       30        2010-06-01 12:00:00.000
        101         409       50        2010-06-01 12:05:30.000
        104         405       0         2010-06-01 11:05:30.000

    I want to summarize the times when the vehicle was stopped (velocity = 0), including: from when to when each stop lasted and for how many minutes, how many times it stopped, and how much time it stopped in total. I wrote this query to do it:

        select longtitude, latitude, MIN(time), MAX(time),
               DATEDIFF(minute, MIN(Time), MAX(time)) as Timespan
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select DATEDIFF(minute, MIN(Time), MAX(time)) as minute
        into #temp3
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select COUNT(*) as [number] from #temp3
        select SUM(minute) as [totaltime] from #temp3
        drop table #temp3

    This query returns:

        longtitude  latitude  (No column name)         (No column name)         Timespan
        104         405       2010-06-01 11:00:03.000  2010-06-01 11:10:05.000  10
        109         405       2010-06-01 11:35:00.000  2010-06-01 11:40:00.000  5

        number
        2

        totaltime
        15

    You can see it works fine, but I really don't like the #temp table. Is there any way to query this without using a temp table? Thank you.
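
    Since both summary numbers are aggregates over the same grouped result, one option is to aggregate a derived table directly and skip the temp table; a hedged sketch against the query above (T-SQL, matching the DATEDIFF syntax used):

        SELECT COUNT(*)        AS [number],
               SUM(t.Timespan) AS [totaltime]
        FROM (SELECT DATEDIFF(minute, MIN(Time), MAX(Time)) AS Timespan
              FROM table_1
              WHERE velocity = 0
              GROUP BY longtitude, latitude) AS t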

  • Most flexible minimizer/compressor for ASP.NET MVC 2?

    - by AlexanderN
    From your experience, what's the most flexible minimizer/compressor (JS + CSS) for ASP.NET MVC 2 you've dealt with? So far:

    - mbcompress doesn't seem to be too MVC-friendly
    - weboptimizer.codeplex.com lacks documentation
    - clientdependency.codeplex.com is still in beta
    - compress2 seems like a good candidate, but I haven't tried it yet
    - mvcscriptmanager only combines and compresses JavaScript, not CSS

    By flexible I mean:

    - Choose what should be compressed, minified, and combined
    - Add exceptions, e.g. if debugging, don't compress XYZ.JS or don't minify ABC.CSS
    - Caching

    In the end, it should help achieve the best YSlow score. If you know of any other assemblies out there, please list them too.
