Search Results

Search found 7517 results on 301 pages for 'fast debugger'.

Page 198/301 | < Previous Page | 194 195 196 197 198 199 200 201 202 203 204 205  | Next Page >

  • Are there still completely new programming languages and paradigms to be born?

    - by llasa
    Are there still completely new programming languages and paradigms (ones that will actually go mainstream and still be in use decades after they appear) waiting to be born? What I'm talking about are groundbreaking shifts like the rise of object-oriented programming, C++, or PHP. By a completely new language I mean one that is genuinely different from anything you know: as different as sitting a developer who has spent a decade in assembler, and even written some kind of 3D game in it, in front of something as high-level as PHP, Ruby, or Python. Which new paradigms and programming languages are still to come? What could be different about them? Who is likely to create them, and how fast will they rise?

    Read the article

  • Building an ASP.NET MVC site, should I go with LINQ to SQL?

    - by aspm
    So I'm about to start a new website from scratch, and I've spent about a week trying to figure out what technology to go with. I'm sold on ASP.NET MVC; I'm 100% sure I'm going to love using that. What I am not so sure about yet is using LINQ to SQL. So far I've gathered some data: 1) Stack Overflow uses it, so it can't be that bad; 2) it can be REALLY slow if you don't take advantage of compiled queries; 3) it will always be slower than ADO.NET, but can be almost as fast if compiled queries are used in the right places; 4) it is NOT the preferred MS solution (there was a thread here on SO about dropping support). I'm itching to use it, but I just want to make sure it's the best choice for me. I come from a heavy ADO/stored-procedure and traditional ASP.NET background (this will be my first experience with ASP.NET MVC).

    Read the article

  • Is Python good for high-load web projects?

    - by Vitali Fokin
    Hello! I decided to start my own web project. It should be a high-load project, and I can't decide which technologies I should use. I'm good at ASP.NET MVC, but I like languages like Python more than C#. I've read a lot about Python and Django/Pylons/etc., but I didn't find any good examples of high-load projects in Python. So the question is: is Python good for a high-load project? Is it fast enough? And if it is, are Python frameworks like Django/Pylons/etc. good for this, or would ASP.NET MVC be the better choice? PS: I'm not interested in Java, Ruby, or PHP :) So I'm choosing only between Python + Django/Pylons/etc. and ASP.NET MVC. Thanks in advance. Please, no holy wars :)

    Read the article

  • Is it possible to upload directly to a remote server using SFTP in ASP.NET MVC?

    - by DucDigital
    Hi! I am currently developing something using ASP.NET MVC. I'm still not very experienced with it, so please help me out. I have a form for users to upload video. The current concept for getting the file to a remote server is to upload it to the current server and then use FTP to push it to the remote server. For me this is not very fast, since you upload to the current server (time x1) and then the current server pushes it to the new server (time x2), so the time is doubled. My idea is to have the user upload to the current server and, WHILE the user is uploading, have the current server add the file to the DB and also send the file to the remote server at the same time using SFTP. Is this possible, and are there any security holes in this concept? Thank you very much.
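
    The relay idea, forwarding each chunk to the remote host as it arrives instead of waiting for the whole upload to finish, looks roughly like this. The sketch is in Python with paramiko only to illustrate the streaming pattern (the asker is on ASP.NET MVC, where an SFTP library such as SSH.NET would play the same role); the host, credentials, path and chunk size are placeholders:

        # Sketch: relay an incoming upload to a remote host over SFTP chunk by chunk.
        # Assumes paramiko is installed; host/credentials/path are made up.
        import paramiko

        def relay_upload(incoming_stream, remote_path, chunk_size=64 * 1024):
            transport = paramiko.Transport(("remote.example.com", 22))   # hypothetical host
            transport.connect(username="uploader", password="secret")    # placeholder creds
            sftp = paramiko.SFTPClient.from_transport(transport)
            remote_file = sftp.open(remote_path, "wb")
            try:
                while True:
                    chunk = incoming_stream.read(chunk_size)
                    if not chunk:
                        break
                    remote_file.write(chunk)   # forwarded as soon as it is received
            finally:
                remote_file.close()
                sftp.close()
                transport.close()

    The security questions stay the same as for any relay: validate the file type/size on the current server before forwarding, and keep the SFTP credentials out of anything the client can see.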

    Read the article

  • What is the difference between Zend Framework and WordPress as a framework?

    - by justjoe
    I only know WordPress and have started looking for an alternative framework, Zend. I've heard hearsay that Zend is better than other frameworks, and that if you're "a serious coder", or try to act like one, you need to use it to build your web apps. Some say Zend is better, but that's subjective; it's supposedly fast and secure. But nobody has told me the reason, or at least compared it with WordPress. Ultimate question: does Zend have themes or plugins just like WordPress? Any hint will be helpful.

    Read the article

  • How to handle a large table in MySQL?

    - by Frantz Miccoli
    I have a database used to store items and properties of these items. The number of properties is extensible, so there is a join table that stores each property value associated with an item: CREATE TABLE `item_property` ( `property_id` int(11) NOT NULL, `item_id` int(11) NOT NULL, `value` double NOT NULL, PRIMARY KEY (`property_id`,`item_id`), KEY `item_id` (`item_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; This database has two goals: storing (which has first priority and has to be very quick; I would like to perform many inserts, hundreds, in a few seconds), and retrieving data (selects using item_id and property_id; this is second priority, and it can be slower, but not too much, because that would ruin my usage of the DB). Currently this table holds 1.6 billion entries, and a simple count can take up to 2 minutes... Inserting isn't fast enough to be usable. I'm using Zend_Db to access my data and would really prefer that you don't suggest I develop any PHP-side part. Thanks for your advice!
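
    On the write side, a pattern that usually helps with a schema like this is batching many rows into multi-row INSERTs inside a single transaction instead of issuing hundreds of individual statements. A rough sketch of the batching idea in Python with pymysql (the asker uses Zend_Db from PHP, so this only illustrates the pattern; connection details and batch size are placeholders):

        # Sketch: batch (property_id, item_id, value) rows into multi-row INSERTs
        # and commit once per batch so InnoDB flushes the log far less often.
        import pymysql

        def insert_batch(rows, batch_size=500):
            conn = pymysql.connect(host="localhost", user="app",
                                   password="secret", database="items")
            try:
                with conn.cursor() as cur:
                    batch = []
                    for row in rows:
                        batch.append(row)
                        if len(batch) >= batch_size:
                            cur.executemany(
                                "INSERT INTO item_property (property_id, item_id, value) "
                                "VALUES (%s, %s, %s)", batch)
                            batch = []
                    if batch:
                        cur.executemany(
                            "INSERT INTO item_property (property_id, item_id, value) "
                            "VALUES (%s, %s, %s)", batch)
                conn.commit()   # one commit for the whole batch
            finally:
                conn.close()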

    Read the article

  • Why is doing a TOP(1) on an indexed column in MSSQL slow?

    - by reinier
    I'm puzzled by the following. I have a DB with around 10 million rows, and (among other indices) there is an index on one column. There are 700k rows where the campaignid is 3835, and for all these rows the connectionid is the same. I just want to find out this connectionid. use messaging_db; SELECT TOP (1) connectionid FROM outgoing_messages WITH (NOLOCK) WHERE (campaignid_int = 3835) This query takes approximately 30 seconds to run! With my limited DB knowledge, I would expect it to take any one of the rows and return its connectionid. If I test the same query for a campaign which only has one entry, it runs really fast, so the index works. How would I tackle this, and why does it not work?

    Read the article

  • Scalable (half-million files) version control system

    - by hashable
    We use SVN for our source-code revision control and are experimenting with using it for non-source-code files. We are working with a large set (300-500k) of short (1-4 kB) text files that will be updated on a regular basis and need to be kept under version control. We tried using SVN in flat-file mode and it struggled to handle the first commit (500k files checked in), taking about 36 hours. On a daily basis, we need the system to be able to handle 10k modified files per commit transaction in a short time (<5 min). My questions: Is SVN the right solution for my purpose? The initial speed seems too slow for practical use. If yes, is there a particular SVN server implementation that is fast? (We are currently using the GNU/Linux default svn server and command-line client.) If no, what are the best F/OSS or commercial alternatives? Thanks

    Read the article

  • Given a few strings, how many can be made lexicographically least by modifying the alphabet?

    - by Jackson W
    The number of strings can be huge, as in 30000. Given N strings, output which ones can be made lexicographically least by modifying the English alphabet, e.g. acdbe... For example, if the strings were: omm moo mom "mom" is already lexicographically least with the original English alphabet. We can make the word "omm" least by switching "m" and "o" in the alphabet ("abcdefghijklonmpqrstuvwxyz"). The other one you can't make lexicographically least, no matter what you do. Is there any fast way to do this? I have no way to approach this except trying every single possible alphabet.
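
    One way to avoid trying every alphabet: for each candidate string, compare it against every other string, and at the first position where they differ record the constraint "the candidate's letter must come before the other string's letter"; the candidate can be made least exactly when these constraints contain no cycle (and no other string is a proper prefix of it). A rough sketch of that idea in Python, using the strings from the example above (not tuned for the 30000-string limit, where you would want to share work across candidates):

        # Sketch: collect "a must precede b" constraints for a candidate and
        # check them for cycles with Kahn's algorithm.
        from collections import defaultdict, deque

        def can_be_least(candidate, others):
            edges = defaultdict(set)        # letter -> letters that must come after it
            for other in others:
                if other == candidate:
                    continue
                for a, b in zip(candidate, other):
                    if a != b:
                        edges[a].add(b)     # a must precede b in the new alphabet
                        break
                else:
                    if len(other) < len(candidate):
                        return False        # other is a proper prefix of candidate
            indegree = defaultdict(int)
            for a in edges:
                for b in edges[a]:
                    indegree[b] += 1
            nodes = set(edges) | {b for targets in edges.values() for b in targets}
            queue = deque(n for n in nodes if indegree[n] == 0)
            seen = 0
            while queue:
                a = queue.popleft()
                seen += 1
                for b in edges[a]:
                    indegree[b] -= 1
                    if indegree[b] == 0:
                        queue.append(b)
            return seen == len(nodes)       # True iff the constraints are acyclic

        words = ["omm", "moo", "mom"]
        print([w for w in words if can_be_least(w, words)])   # ['omm', 'mom']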

    Read the article

  • Multi-threaded downloader in C# question

    - by blez
    Currently I have a multi-threaded downloader class that uses HttpWebRequest/Response. It all works fine and it's super fast, BUT... the problem is that the data needs to be streamed to another app while it's downloading. That means it must be streamed in the right order: the first chunk first, and then the next in the queue. Currently my downloader class is sync and Download() returns byte[]. In my async multi-threaded class I make, for example, a list with 4 empty elements (slots) and pass each slot index to each thread using the Download() function. That simulates synchronization, but it's not what I need. How should I handle the queueing to make sure the data is streamed as soon as the first chunk starts downloading?
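
    The usual trick is to let the worker threads download chunks in any order, but put a single reordering buffer in front of the consumer so that chunk i is only emitted once chunks 0..i-1 have gone out. A rough sketch of that buffer in Python, purely to illustrate the pattern (the asker's code is C#, and the download/worker plumbing is omitted):

        # Sketch: workers finish chunks out of order; the buffer flushes them in order.
        import threading

        class OrderedStreamer:
            def __init__(self, output):
                self.output = output        # any object with a write() method
                self.pending = {}           # chunk index -> bytes, completed out of order
                self.next_index = 0
                self.lock = threading.Lock()

            def put(self, index, data):
                # Called by a worker thread when chunk `index` has finished downloading.
                with self.lock:
                    self.pending[index] = data
                    # Flush everything now contiguous with what was already written.
                    while self.next_index in self.pending:
                        self.output.write(self.pending.pop(self.next_index))
                        self.next_index += 1

    Chunk 0 is forwarded the moment it completes; a later chunk waits in the buffer only until the gap before it closes, so the consumer always sees the bytes in order.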

    Read the article

  • What are good approaches for C++ programmers to store error messages?

    - by Narek
    Say I have a huge codebase and different kinds of error messages. I want a separate place where I store the error codes and error messages. For example, for an error that occurred because the program could not open a file, I store: F001 "Can not open a file." "The same error message in another language" "The same error message in a third language" What is the best way of storing the different kinds of error messages and codes in a file so that a C++ programmer can use them in a program quickly and easily? FYI, I am working with the Qt library.
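
    Whatever the host language, the usual shape is a flat catalog file keyed by error code, with one message per language and a fallback; with Qt specifically, tr() and QTranslator would normally carry the per-language part. A minimal sketch of the catalog idea in Python, with a made-up tab-separated format, just to show the lookup:

        # Sketch: "code<TAB>language<TAB>message" entries, loaded once into a dict
        # and looked up by (code, language) with a fallback language.
        def load_catalog(path):
            catalog = {}
            with open(path, encoding="utf-8") as f:
                for line in f:
                    line = line.rstrip("\n")
                    if not line or line.startswith("#"):
                        continue
                    code, lang, message = line.split("\t", 2)
                    catalog[(code, lang)] = message
            return catalog

        def lookup(catalog, code, lang, fallback="en"):
            return catalog.get((code, lang)) or catalog.get((code, fallback), code)

        # catalog = load_catalog("errors.tsv")   # hypothetical file name
        # print(lookup(catalog, "F001", "de"))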

    Read the article

  • C++ union assignment, is there a good way to do this?

    - by Sqeaky
    I am working on a project with a library and I must work with unions. Specifically, I am working with SDL and the SDL_Event union. I need to make copies of SDL_Events, and I could find no good information on overloading assignment operators for unions. Provided that I can overload the assignment operator, should I manually sift through the union members and copy the pertinent ones, can I simply copy some members (this seems dangerous to me), or should I maybe just use memcpy() (this seems simple and fast, but slightly dangerous)? If I can't overload operators, what would my best options be from there? I guess I could make new copies and pass around a bunch of pointers, but in this situation I would prefer not to do that. Any ideas welcome!

    Read the article

  • django url tag performance

    - by zxygentoo
    I was trying to integrate django-voting into my project following the RedditStyleVoting instructions. In my urls.py I did something like this: url(r'^sections/(?P<object_id>\d+)/(?P<direction>up|down|clear)vote/?$', vote_on_object, dict( model=Section, template_object_name='section', template_name='script/section_confirm_vote.html', allow_xmlhttprequest=True ), name="section_vote"), Then, in my template: {% vote_by_user user on section as vote %} {% score_for_object section as score %} {% vote_by_user user on section as vote %} {% score_for_object section as score %} {{ score.score|default:0 }} It takes over 1.3s to load the page, but by hard-coding it like this: {% vote_by_user user on section as vote %} {% score_for_object section as score %} {{ score.score|default:0 }} I got 50ms. Just by avoiding the url tag resolution I got a 20+ times performance improvement. Is there something I did wrong? If not, what's the best practice here: should we do things the right way or the fast way?
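
    If the time really is going into URL reversing, one mitigation is to resolve each named URL once and reuse it, rather than re-resolving it for every item the template renders. A rough sketch of a memoized reverse() helper in Python (the import path matches Django of that era, django.core.urlresolvers; cached_reverse is a hypothetical helper, not part of django-voting):

        # Sketch: resolve each named URL once per (name, args) and reuse the result.
        from django.core.urlresolvers import reverse   # django.urls.reverse on newer Django

        _url_cache = {}

        def cached_reverse(name, *args):
            key = (name, args)
            if key not in _url_cache:
                _url_cache[key] = reverse(name, args=args)
            return _url_cache[key]

        # e.g. compute cached_reverse("section_vote", section.id, "up") in the view
        # and pass it into the template context instead of resolving it per item.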

    Read the article

  • MPMoviePlayerController seems to make 2 calls for each movie

    - by user76328
    I seem to have an issue where an iPhone app using MPMoviePlayerController makes 2 calls to the server for each video it wants to play back. This occurs with the iPhone 3.x OS and libraries but not with iPhone 2.x. I know that the iPhone does progressive download and will make multiple 206 requests, etc., but as far as our back end is concerned the player appears to create 2 separate sessions. This only appears to be an issue with iPhone native apps and not with iPhone videos played through Safari. Additional info from Apple: iPhone OS 3.0 added support for streaming audio and video over HTTP, and MPMoviePlayerController must validate the media before playback to determine whether it is streaming content or progressively downloaded content. This is the delay you are experiencing. On a fast network, the delay should be minimized. Is this double check causing 2 sessions to be created for each video request? Is anyone else seeing the same issue? Is there a remedy?

    Read the article

  • Is it possible to generate plain-old XML using Haml?

    - by lsdr
    I've been working on a piece of software where I need to generate a custom XML file to send back to a client application. The current solutions in the Ruby/Rails world for generating XML files are slow at best. Builder and even Nokogiri, while they have a nice syntax and are maintainable solutions, consume too much time and processing. I could definitely go with ERB, which provides good speed at the expense of building the whole XML by hand. Haml is a great tool, has a nice, straightforward syntax, and is fairly fast, but I'm struggling to build pure XML files with it. Which makes me wonder: is it possible at all? Does anyone have pointers to code or docs showing how to build a full, valid XML document from Haml?

    Read the article

  • Dynamically generating high-performance functions in Clojure

    - by mikera
    I'm trying to use Clojure to dynamically generate functions that can be applied to large volumes of data; that is, a requirement is that the functions be compiled to bytecode in order to execute fast, but their specification is not known until run time. For example, suppose I specify functions with a simple DSL like: (def my-spec [:add [:multiply 2 :param0] 3]) I would like to create a function compile-spec such that: (compile-spec my-spec) would return a compiled function of one parameter x that returns 2x+3. What is the best way to do this in Clojure?
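
    In Clojure the usual answer is to walk the spec and either emit a form for eval or build the function by composing closures; the recursive shape of such a compiler is the same in any language. A rough sketch of that recursion in Python, purely to show the structure (the operator keywords mirror the spec above; this says nothing about which approach performs best on the JVM):

        # Sketch: recursively compile a nested spec into a closure of one parameter.
        import math

        def compile_spec(spec):
            if spec == ":param0":
                return lambda x: x
            if isinstance(spec, (int, float)):
                return lambda x: spec
            op, *args = spec
            compiled = [compile_spec(a) for a in args]
            if op == ":add":
                return lambda x: sum(f(x) for f in compiled)
            if op == ":multiply":
                return lambda x: math.prod(f(x) for f in compiled)
            raise ValueError("unknown op: %r" % op)

        f = compile_spec([":add", [":multiply", 2, ":param0"], 3])
        print(f(5))   # 2*5 + 3 == 13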

    Read the article

  • How do I choose a database?

    - by liamzebedee
    I need a comparison table of some sort for database varieties (MySQL, SQLite, etc.), and I can't find one. My use case: I am implementing storage of objects in a distributed hash table. I need a database solution that is: fast for sorting; simplistic (no users, preferably no additional structures like multiple tables, etc.); concurrent (if possible); multi-platform; file-based (not stored primarily in memory); centralized. I will be programming in Go. As I understand it, I believe I need what is called a document-oriented database, because I am storing objects identified by keys. EDIT: While I am implementing a DHT, I will also be storing metadata about the objects, such as access counts, etc. It would also be preferable to have TTL (time to live).

    Read the article

  • Which layout engine for finding coordinates of HTML elements on a web page?

    - by Mexx
    I am doing a web data classification task and was wondering if I could get the coordinates of HTML elements as they would appear in a web browser, without taking into consideration any CSS or JavaScript referenced by the web page. My programming language is C++, and I need results for a couple of million pages, so it has to be fast. I know there is a Microsoft COM component which renders the page in a web browser control and can then be queried for the positions of different HTML tags, but this is not suitable in my case because it first renders the whole page, which takes a lot of time. So, as I found out, there are the open-source layout engines WebKit and Gecko that could probably be used for this. But that's a huge body of code, and I need someone to direct me to the right classes or modules to look into, or to any previous/similar work someone has done. Also, please let me know what you think is a good choice if I want to adapt the existing code to use multiple threads to make it faster. Thanks

    Read the article

  • Quick way to do data lookup in PHP

    - by Ghostrider
    I have a data table with 600,000 records that is around 25 megabytes in size. It is indexed by a 4-byte key. Is there a way to find a row in such a dataset quickly with PHP without resorting to MySQL? The website in question is mostly static, with minor PHP code and no database dependencies, and is therefore fast. I would like to add this data without having to use MySQL if possible. In C++ I would memory-map the file and do a binary search in it. Is there a way to do something similar in PHP?
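
    The C++ approach carries over to any language that can seek and read a file: keep the records sorted by key at a fixed width, then binary-search by seeking to the middle record; PHP's fseek/fread/unpack can do the same job. A rough sketch of the idea in Python, where the record layout (4-byte big-endian key plus a fixed-size payload) is an assumption for illustration:

        # Sketch: binary search over fixed-width records sorted by a 4-byte key.
        import os
        import struct

        KEY_SIZE = 4
        PAYLOAD_SIZE = 40                      # made-up payload width
        RECORD_SIZE = KEY_SIZE + PAYLOAD_SIZE

        def find_record(path, wanted_key):
            with open(path, "rb") as f:
                count = os.fstat(f.fileno()).st_size // RECORD_SIZE
                lo, hi = 0, count - 1
                while lo <= hi:
                    mid = (lo + hi) // 2
                    f.seek(mid * RECORD_SIZE)
                    record = f.read(RECORD_SIZE)
                    (key,) = struct.unpack(">I", record[:KEY_SIZE])
                    if key == wanted_key:
                        return record[KEY_SIZE:]       # the payload
                    if key < wanted_key:
                        lo = mid + 1
                    else:
                        hi = mid - 1
            return None

    With 600,000 records this is at most about 20 seeks per lookup, and the OS page cache keeps the hot middle of the file in memory after the first few queries.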

    Read the article

  • NoSQL for concurrent reads/writes

    - by Mickael Marrache
    After running into performance problems with an application that uses a MySQL database, I'm thinking of using a NoSQL solution. My architecture is as follows: one application receives messages from the network at a high throughput (i.e. 50,000 messages/sec). Each message is stored in the DB, so it's important for the write rate to be as fast as the arrival rate. Then I also have some PHP pages that access the DB to get the data stored by the other application. It's important to me that the retrieved data is as fresh as possible (i.e. not old data, let's say no more than 5 seconds old). Also, the data is not critical, so I don't need any mechanism to guard against losing it. I see there are a lot of NoSQL solutions, but I don't know whether they are all relevant. Could you please give me some directions? Thanks

    Read the article

  • Help with concept - filters and number of items

    - by dreamer
    Please check http://www.alibaba.com/catalogs/cid/702/Laptops.html; they have a nice filter there with the number of items for each option. Note one detail: they have locations there. Same thing on olx.com: location and the number of items for each category. Now imagine I have these tables: [products] (Id, Name, CategoryId, LocationId), [Categories] (Id, Name), [Location] (Id, Name). My question is how I can do the same, because counting, even with caching, looks expensive, and they return results pretty fast... Please advise on possible ways to do this in ASP.NET, C#, MVC and MS SQL, but avoid simple answers like "count and change". Thank you in advance.

    Read the article

  • What are some good ways to do intermachine locking?

    - by mike
    Our server cluster consists of 20 machines, each running 10 processes with 5 threads each. We'd like some way to prevent any two threads, in any process, on any machine, from modifying the same object at the same time. Our code is written in Python and runs on Linux, if that helps narrow things down. Also, it's a pretty rare case that two such threads want to do this, so we'd prefer something that optimizes the "only one thread needs this object" case to be really fast, even if it means that the "one thread has locked this object and another one needs it" case isn't great. What are some of the best practices?
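
    One common pattern is a central lock service that the uncontended case can hit with a single round trip; for example, Redis's SET with NX and an expiry gives a simple per-object lock. A rough sketch with redis-py (the host name, key scheme and TTL are assumptions, and a production setup would want a safer release, e.g. a compare-and-delete Lua script or redis-py's built-in Lock helper):

        # Sketch: per-object lock using Redis SET NX with an expiry.
        import uuid
        import redis

        r = redis.Redis(host="lockserver.example.com")   # hypothetical central Redis

        def acquire(object_id, ttl_seconds=30):
            token = str(uuid.uuid4())
            # One round trip in the common, uncontended case.
            if r.set("lock:%s" % object_id, token, nx=True, ex=ttl_seconds):
                return token
            return None                                   # somebody else holds the lock

        def release(object_id, token):
            key = "lock:%s" % object_id
            # Naive release; a Lua script should check the token and delete atomically.
            if r.get(key) == token.encode():
                r.delete(key)

    The expiry means a crashed process cannot hold a lock forever, which matters more across 20 machines than it does inside a single process.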

    Read the article

  • Best indexing strategy for several varchar columns in Postgres

    - by Corey
    I have a table with 10 columns that need to be searchable (the table itself has about 20 columns). The user will enter query criteria for at least one of the columns, but possibly all ten. All non-empty criteria are then combined into an AND condition. Suppose the user provided non-empty criteria for column1, column4 and column8; the query would be: select * from the_table where column1 like '%column1_query%' and column4 like '%column4_query%' and column8 like '%column8_query%' So my question is: am I better off creating one index with 10 columns, or 10 indexes with one column each? Or do I need to find out which sets of columns are queried together frequently and create indexes for them (an index on columns 1, 4 and 8 in the case above)? If my understanding is correct, a single index of 10 columns would only work effectively if all 10 columns are in the condition. I'm open to any suggestions here. Additionally, the row count of the table is only expected to be around 20-30K rows, but I want to make sure any and all searches on the table are fast. Thanks!

    Read the article

  • How do I use Declarations (type, inline, optimize) in Scheme?

    - by kunjaan
    How do I declare the types of parameters in order to circumvent type checking? How do I tell the compiler to optimize for speed, so that a function runs as fast as possible, like (optimize speed (safety 0))? How do I make an inline function in Scheme? How do I use an unboxed representation of a data object? And finally, are any of these important or necessary? Can I depend on my compiler to make these optimizations? Thanks, kunjaan.

    Read the article

  • Which language + framework should I use for building standalone clients for my PHP webapp?

    - by Jagira
    Hello, I have a PHP web application which basically maintains a set of user profiles and their records. My users use the app via a browser. I am planning to build a standalone desktop client for Windows, in which a user can log in, retrieve and modify the records. Which language + framework would be simple, fast and lightweight to use? I can think of the following options: Microsoft Visual Basic (simplest?), Microsoft Visual C++, Python, PHP + the Bambalam compiler. Are there any other options, and which of these is better?

    Read the article
