Search Results

Search found 9124 results on 365 pages for 'big sal'.


  • Advanced search engine or server for relational database [closed]

    - by Pawel
    In my current project we store a large volume of data in a relational database. A recent key requirement is to enrich the application with advanced search capabilities. Performance is an important factor, because the tables are very large (10+ million records) and carry parent-child relations (for example, multi-level parent-child hierarchies where I need to find all parents with specific children). The search engine must be able to follow these references when matching. I have found some potential engines on Stack Overflow, but they all seem geared toward text search rather than relational data, and toward Linux hosting: Lucene, Solr, Sphinx. As I understand it, some of them search over "documents" -- is it possible, and efficient, to build such documents programmatically from my relational data? Since I am not familiar with all of their features and capabilities, can anyone make a recommendation or propose a different solution? To summarize my requirements: a framework/engine that can search a relational database, descendants included; support for Microsoft SQL Server; usable from .NET applications; preferably hosted on Windows. Can any of the engines mentioned above solve my problem? Do you know a better solution?
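
    If a document-oriented engine (Lucene/Solr) ends up being the route, the usual pattern is to denormalize each parent row together with its descendants into one flat "document" at index time, so "parents with specific children" becomes an ordinary field query. A minimal sketch of that flattening step, in Python - the table and field names here are hypothetical, not from the project:

        # Sketch: flatten parent-child rows into search-engine documents.
        # Field names are illustrative; re-index a parent whenever one of
        # its descendants changes.
        def build_documents(parents, children_by_parent):
            """parents: list of row dicts; children_by_parent: id -> rows."""
            for p in parents:
                kids = children_by_parent.get(p["id"], [])
                yield {
                    "id": p["id"],
                    "name": p["name"],
                    # Multi-valued field: a hit on any child value returns
                    # the parent document directly.
                    "child_names": [c["name"] for c in kids],
                }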


  • How to separate sets of numbers onto separate lines

    - by Fred
    About the script: the script below should create 300 sets of random characters. What currently happens is that it creates them all on one line, in one big chunk. Despite all my searching and testing, I have had no success fixing it. I would like to know what code to add, and where, so that each of the 300 sets, 15 characters long, is shown and saved to file on its own line. Here is my script:

        <?php
        function GetID($x){
            $characters = array_merge(range('A','Z'), range('a','z'), range(2,9));
            shuffle($characters);
            for($x=0; $x<=299; $x++){
            }
            for (; strlen($ReqID) < $x;){
                $ReqID .= $characters[mt_rand(0, count($characters))];
            }
            return $ReqID;
        }
        $ReqID .= GetID(5);
        $ReqID .= "-";
        $ReqID .= GetID(5);
        $ReqID .= "-";
        $ReqID .= GetID(5);
        echo $ReqID;
        $fh = fopen("file.txt","a+");
        fwrite($fh, ("$ReqID")."\n");
        fclose($fh);
        ?>
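
    For what it's worth, the intended structure - build each 15-character ID in full, then write it on its own line inside the loop - looks roughly like this sketch (Python used for illustration; the XXXXX-XXXXX-XXXXX grouping is kept from the original script):

        import random
        import string

        # Sketch: 300 IDs, one per line, each made of three 5-character
        # groups drawn from letters plus the digits 2-9, as in the original.
        ALPHABET = string.ascii_letters + "23456789"

        def make_id():
            return "-".join(
                "".join(random.choice(ALPHABET) for _ in range(5))
                for _ in range(3)
            )

        with open("file.txt", "a") as fh:
            for _ in range(300):
                fh.write(make_id() + "\n")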


  • IIS7 dynamic content compression and webservices

    - by vandalo
    I am moving an old ASMX web service to a new server with IIS7. This web service basically sends a big DataSet (10 MB+) to a WinForms application. The old solution was implemented with a custom SOAP extension that compressed the content before sending the stream to the client; the client, of course, implemented the same custom SOAP extension to decompress the stream back into a DataSet. Everything has worked pretty well for years. My customer doesn't want to change the code by upgrading to WCF. They just want to put the old app on the new server and use the new dynamic content compression features. We're testing things on a test server (Windows Server 2008) and it seems to work, though it seems slow: we can't see any difference in performance (speed) between the uncompressed and compressed streams. Here's the question: where should I put the settings? Most people say I can't put them in my web.config; others say I can. I am a bit confused. Are there any tricks or things I should know? What about MIME types - should I set some parameters somewhere, considering my stream is XML (a DataSet)? Thanks to everyone who would like to help. Alberto
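
    For reference, a minimal sketch of the site-level setting, assuming the Dynamic Content Compression module is installed on the server. The <urlCompression> switch is normally delegatable to a site's web.config, while the MIME-type list (<httpCompression>/<dynamicTypes>) lives in applicationHost.config and is locked at server level by default. ASMX SOAP responses are typically text/xml, which the default text/* entry should already cover, but that is worth verifying on the server:

        <!-- Sketch: site web.config; assumes the Dynamic Content
             Compression feature is installed on the IIS7 box. -->
        <configuration>
          <system.webServer>
            <urlCompression doStaticCompression="true"
                            doDynamicCompression="true" />
          </system.webServer>
        </configuration>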


  • How do I access the names of VB6 modules from code?

    - by Mark Bertenshaw
    Hi all - It is unlikely that there is an answer for this one, but I'm asking anyway. I am currently maintaining some code which is likely to be refactored soon. Before that happens, I want to make the standard error-handling code, which is injected by an add-in, more efficient and less space-hungry. One thing that annoys me is that every module has a constant called m_ksModuleName that is used to construct a big string, which is then rethrown from the error handler so we can trace the error stack. This is all template code, i.e. repetitive, but I could easily strip it down to a procedure call. Now, I have fixed the code so that you can pass the Me reference to the procedure - but you can't do that from BAS modules. Nor can you access the project name (the part that would be passed as part of a ProgID, for instance) - although you are given it when you raise an error yourself. All these strings are contained in the EXE, DLL or OCX - believe me, I've used a debugger to find them. But how can I access them in code? -- Mark Bertenshaw


  • To use the 'I' prefix for interfaces or not to

    - by ng
    That is the question. So how big a sin is it not to use this convention when developing a C# project? The convention is widely used in the .NET class library. However, I am not a fan, to say the least - not just for aesthetic reasons, but because I don't think it makes any contribution. For example, is IPSec an interface of PSec? Is IIOPConnection an interface of IOPConnection? I usually go to the definition to find out anyway. So would not using this convention cause confusion? Are there any C# projects or libraries of note that drop this convention? Do any C# projects mix conventions, as Apache Wicket unfortunately does? The Java class libraries have existed without this for many years; I don't feel I have ever struggled to read code without it. Also, shouldn't the interface be the most primitive description? I mean, IList<T> as an interface for List<T> in C# - isn't it better to have List<T> as the interface, with LinkedList<T>, ArrayList<T> or even CopyOnWriteArrayList<T> as the classes, where the class names describe the implementation? I get more information there than I do from List<T> in C#.


  • Best Practice: Protecting Personally Identifiable Data in an ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application that stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it. I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key, so it seems a SQL injection attack could still lead to a data leak. I realize permissions should prevent that; permissions should also have prevented the leak in the first place. It seems to me the better method would be to asymmetrically encrypt the data in the web application, store the private key offline, and have a fat client that the two employees can run the few times a year they need the restricted data, so the data is decrypted only on the client. This way, if the server gets compromised, we don't leak old data - although, depending on what the attacker does, we may leak future data. I think the big disadvantage is that this requires rewriting the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend, and more importantly, why?
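
    A minimal sketch of the scheme described - hybrid encryption, with each record encrypted under a fresh symmetric key that is then wrapped with the offline RSA public key. Python's cryptography package is used purely for illustration; the same shape works with .NET's RSACryptoServiceProvider:

        # Sketch: the web app holds only the PUBLIC key, so a compromised
        # server cannot decrypt records that are already stored.
        from cryptography.fernet import Fernet
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding

        def encrypt_record(public_key, plaintext):
            data_key = Fernet.generate_key()        # per-record symmetric key
            ciphertext = Fernet(data_key).encrypt(plaintext)
            wrapped_key = public_key.encrypt(       # only the offline private
                data_key,                           # key can unwrap this
                padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                             algorithm=hashes.SHA256(), label=None),
            )
            return wrapped_key, ciphertext          # store both columns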


  • Help regarding database and logic layer for my ASP.NET MVC application

    - by Ismail S
    I'm going to start a new project which will be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go with MySQL as the database for several reasons, but I'm worried about a few things. I have a good few years of experience working with SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easy to use once you are familiar with it. First, accessing data should be easy. So I thought I should use LINQ with MySQL, but somewhere I read that this is not directly supported, though the MySQL .NET connector adds support for the Entity Framework. I don't know the pros and cons of that. I would love to implement the repository pattern - will that be possible if I use the Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and use LINQ to SQL on SQL Server directly. I'm also concerned about performance: someone told me that the Entity Framework fetches a lot of data and then filters it. Is that right?


  • format an xml string in Ruby

    - by user1476512
    Given an XML string like this:

        <some><nested><xml>value</xml></nested></some>

    what's the best option (using Ruby) to format it readably, like:

        <some>
          <nested>
            <xml>value</xml>
          </nested>
        </some>

    I've found an answer here: "what's the best way to format an xml string in ruby?", which is really helpful. But it formats the XML like:

        <some>
          <nested>
            <xml>
              value
            </xml>
          </nested>
        </some>

    As my XML string is a little long, it is not readable in this format. Thanks in advance!
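
    For comparison, this is the target behavior as lxml exposes it in Python - text stays inline when an element contains only text. A Ruby counterpart would be Nokogiri, though indenting behavior varies by library and version, so treat this as a sketch of the desired output rather than a drop-in answer:

        from lxml import etree

        doc = etree.fromstring("<some><nested><xml>value</xml></nested></some>")
        print(etree.tostring(doc, pretty_print=True).decode())
        # <some>
        #   <nested>
        #     <xml>value</xml>
        #   </nested>
        # </some>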


  • Problem with floating divs in IE8

    - by hivehicks
    I want to make two blocks stand side by side. In Opera, Chrome and Firefox I get the expected result with the code I use, but IE8 refuses to display it correctly. Here's the IE8 screenshot: http://ipicture.ru/upload/100405/RCFnQI7yZo.png and the Chrome screenshot (how it should look): http://ipicture.ru/upload/100405/4x95HC33zK.png Here's my HTML:

        <div id="balance-container">
          <div id="balance-info-container">
            <p class="big" style="margin-bottom: 5px;">
              <strong>
                <span style="color: #56666F;">??????:</span>
                <span style="color: #E12122;">-2312 ???</span>
              </strong>
            </p>
            <p class="small minor"><strong>????????? 1000 ???. ?? 1.05.10</strong></p>
          </div>
          <div id="balance-button-container">
            <button id="pay-button" class="green-button">????????? ????</button>
          </div>
        </div>

    And the CSS:

        #balance-container {
          margin-left: auto;
          margin-right: auto;
          width: 390px;
        }
        #balance-info-container,
        #balance-button-container {
          float: left;
        }
        #balance-info-container {
          width: 250px;
        }


  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    Weird, but I am investigating Oracle Coherence as a substitute for a distributed cache. My primary problem is that we don't have a distributed cache as such in our app right now - that's my major concern, and that's what I want to implement. So, say I take a machine and start a new (third) reading process: it should be able to connect to the cache, listen to it, and end up with a full copy of the cache, triplicated (as of now it's duplicated). That's a waste from a common-sense standpoint too. The size of the cache is 2 GB, and not being distributed is limiting us. That brings me to Coherence. But we don't have a database as a persistent store either; we have archival processes as our persistent store (90 days' worth of data). Now multiply that by roughly 2 GB * 90 (the bare minimum we want to keep). That is my preliminary/intermediate analysis of Coherence as a solution. And a (supposedly) brilliant thought crossed my mind: why not use Coherence as the persistent storage behind my distributed cache? Does Oracle Coherence support that? I would get rid of the archiving infrastructure too (I hate daemon archiving processes). For some strange reason, I don't want to go to a database to replace those flat files. What say you - can Coherence be my savior? Any other stable alternatives? (Coherence is imposed on me by the big guys, FYI.)


  • Why does Ruby have Rails while Python has no central framework?

    - by yar
    This is a historical question, not a comparison-between-languages question. This article from 2005 talks about the lack of a single, central framework for Python. For Ruby, this framework is clearly Rails. Why, historically speaking, did this happen for Ruby but not for Python? (Or did it happen, and that framework is Django?) Also, the hypothetical questions: would Python be more popular if it had one good framework? Would Ruby be less popular if it had no central framework? [Please avoid discussions of whether Ruby or Python is better, which is just too open-ended to answer.] Edit: Though I thought this was obvious, I'm not saying that other frameworks do not exist for Ruby, but rather that the big one in terms of popularity is Rails. Also, I should mention that I'm not saying frameworks for Python are not as good as (or better than) Rails. Every framework has its pros and cons, but Rails seems, as Ben Blank says in one of the comments below, to have surpassed Ruby itself in terms of popularity. There is no example of that on the Python side. Why? That's the question.


  • Unpredictable CCK field name in returned View data

    - by AK
    I'm using views_get_view_result to directly access the data in a view. I've stumbled upon odd behavior where CCK fields are prefixed with the first field name as a query optimization (explained here). What's bizarre, though, is that the fields are named differently depending on whether I retrieve the data as Anonymous or as Admin. I'm pretty sure all my permissions are set up, and the view itself has no restrictions. What is going on here? This is a big problem, since I can't know how to retrieve a field. Here's a dump of the two view results. Notice that node_data_field_game_date_field_game_home_score_value != node_data_field_game_official_field_game_home_score_value.

        // View as Admin
        stdClass Object (
            [nid] => 3191
            [node_data_field_game_date_field_game_date_value] => 2010-03-27T00:00:00
            [node_type] => game
            [node_vid] => 5039
            [node_data_field_game_date_field_game_official_value] => 0
            [node_node_data_field_game_home_team_title] => TeamA
            [node_node_data_field_game_home_team_nid] => 3396
            [node_data_field_game_date_field_game_home_score_value] => 68
            [node_node_data_field_game_visitor_team_title] => TeamB
            [node_node_data_field_game_visitor_team_nid] => 3442
            [node_data_field_game_date_field_game_visitor_score_value] => 118
        )

        // View as Anonymous
        stdClass Object (
            [nid] => 3191
            [node_data_field_game_date_field_game_date_value] => 2010-03-27T00:00:00
            [node_type] => game
            [node_vid] => 5039
            [node_data_field_game_official_field_game_official_value] => 0
            [node_node_data_field_game_home_team_title] => TeamA
            [node_node_data_field_game_home_team_nid] => 3396
            [node_data_field_game_official_field_game_home_score_value] => 68
            [node_node_data_field_game_visitor_team_title] => TeamB
            [node_node_data_field_game_visitor_team_nid] => 3442
            [node_data_field_game_official_field_game_visitor_score_value] => 118
        )


  • Performing an SVD on tweets. Memory problem

    - by plotti
    I have generated a huge CSV file as output from my POS tagging and stemming. It looks like this:

                   word1  word2  word3  ...  word14400
        person1    1      2      0           1
        person2    0      0      1           0
        ...
        person650

    It contains the word counts for each person; each row is that person's characteristic vector. I want to run an SVD on this beast, but it seems the matrix is too big to be held in memory for the operation. My question is: should I reduce the column count by removing words whose column sum is, for example, 1, meaning they have been used only once? Do I bias the data too much with this attempt? I tried the RapidMiner approach of loading the CSV into a database and then reading it in sequentially in batches for processing, as RapidMiner proposes, but MySQL can't store that many columns in a table. If I transpose the data and then re-transpose it on import, that also takes ages. So, in general, I am asking for advice on how to perform an SVD on such a corpus.
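
    Pruning columns that sum to 1 only removes words used once, which tends to matter little for the leading singular vectors - but the more direct fix is to avoid the dense matrix entirely: keep the counts sparse and take a truncated SVD. A sketch with SciPy, where the hard-coded triplets stand in for values read from the CSV:

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import svds

        # Sketch: person-by-word count matrix in sparse form; rows, cols
        # and vals would come from parsing the CSV, not be hard-coded.
        rows, cols, vals = [0, 0, 1], [0, 1, 2], [1.0, 2.0, 1.0]
        X = csr_matrix((vals, (rows, cols)), shape=(650, 14400))

        # Truncated SVD: only the top-k singular triplets are computed,
        # so the full dense matrix is never materialized.
        U, s, Vt = svds(X, k=100)
        person_vectors = U * s   # low-rank characteristic vector per person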


  • How I May Have Taken A Wrong Path in Programming

    - by Ygam
    I am majorly stumped right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitude toward programming:

    - I tend to be a purist, scorning inelegant approaches to solving problems with code
    - I look at everything at a large scale, planning everything before I start coding, in either simple flowcharts or complex UML diagrams
    - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development time
    - I am obsessed with good directory structures and naming conventions for files, classes, methods, and variables
    - I always want to study something new, even, as I said, at the cost of missing deadlines
    - I see software development as something to engineer, to architect: seeing how things relate to each other and how blocks of code interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
    - I combine OOP and procedural coding whenever I see fit
    - I want my code to execute fast (thus the elegant approaches and the refactoring)

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they have been programming since our first year in college). By "the other way around" I mean: they fire up the editor and get the job done much faster, because they don't stop to consider how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; they mostly use web APIs, piece them together, and voila - working code! Clients are happy, they get paid fast, at the expense of really unmaintainable, hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common defense being that internet connections are much faster these days and hardware is more powerful). The excuse I often hear is that clients don't care how you write the code, but they do care how soon you deliver it; if it works, all is good. Now, may my "purist" approach have been the wrong way to start programming? Should I just dump these purist concepts and code like hell, because, as I have seen, clients don't really care how beautifully coded it is?


  • A question about the basics of how LINQ to SQL works

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that when doing LINQ queries like

        from Customer in DB.Customers where Customer.Age > 30 select Customer

    it would get all customers from the database ("SELECT * FROM Customers"), move them into an array, and then search that array using .NET methods. That would be very inefficient - what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now, after experiencing how fast LINQ to SQL actually is, I suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string,

        SELECT * FROM Customers WHERE Age > 30

    and runs the query only when necessary. So my question is: am I right? And when is the query actually run? I'm asking not only because I want to understand how it works in order to build well-optimized applications, but because I came across the following problem. I have two tables; one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query:

        from Book in DB.Books
        where (from Sale in DB.Sales
               where Sale.SalesAmount >= 50
                 and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
               select Sale.BookID).Contains(Book.ID)
        select Book

    The point is, I have to use the checking part in several queries, so I decided to create a variable holding the IDs of all popular books:

        var popularBooksIDs = from Sale in DB.Sales
                              where Sale.SalesAmount >= 50
                                and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
                              select Sale.BookID;

    BUT when I try to do the query now:

        from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book

    it doesn't work! That's why I think we can't use these kinds of shortcuts in LINQ to SQL queries, just as we can't in real SQL, and have to write straightforward queries. Am I right?


  • Wasteful Ajax Page Loading

    - by Matt Dawdy
    I've started a new job, and the portion of the project I'm working on has a very odd structure. Every page is a .NET .aspx page, and it loads just fine, but nothing is really done at load time. Everything is loaded from a jQuery document-ready handler. What is even more... interesting... is that the ready handler makes Ajax calls that drop entire .aspx pages into divs on the page, but first it strips out several parts of the returned page. This is the "magic" script the previous programmer ran on all the HTML returned from his Ajax calls:

        function CleanupResponseText(responseText, uniqueName) {
            responseText = responseText.replace("theForm.submit();",
                "SubmitSubForm(theForm, $(theForm).parent());");
            responseText = responseText.replace(new RegExp("theForm", "g"), uniqueName);
            responseText = responseText.replace(new RegExp("doPostBack", "g"),
                "doPostBack" + uniqueName);
            return responseText;
        }

    He then intercepts any kind of form postback and runs his own form submission function:

        function SubmitSubForm(form, container) {
            //ShowLoading(container);
            $(form).ajaxSubmit({
                url: $(form).attr("action"),
                success: function(responseText) {
                    $(container).html(CleanupResponseText(responseText, form.id));
                    $("form", container).css("margin-top", "0").css("padding-top", "0");
                    //HideLoading(container);
                }
            });
        }

    Am I way off base in thinking that this is less than optimal? I mean, how does the browser handle the html, head and other tags that have nothing to do with what you are really trying to drop into that div? Also, he's returning things like asp:GridView controls and the associated ViewState, which can be quite large if the data set is big. Has anyone seen this before?


  • How to Alphabetize a CSS file in Vim

    - by Kev
    I have a CSS file:

        div#header h1 {
            z-index: 101;
            color: #000;
            position: relative;
            line-height: 24px;
            margin-right: 48px;
            border-bottom: 1px solid #dedede;
            font-size: 18px;
        }
        div#header h2 {
            z-index: 101;
            color: #000;
            position: relative;
            line-height: 24px;
            margin-right: 48px;
            border-bottom: 1px solid #dedede;
            font-size: 18px;
        }

    I want to alphabetize the lines between the { ... }:

        div#header h1 {
            border-bottom: 1px solid #dedede;
            color: #000;
            font-size: 18px;
            line-height: 24px;
            margin-right: 48px;
            position: relative;
            z-index: 101;
        }
        div#header h2 {
            border-bottom: 1px solid #dedede;
            color: #000;
            font-size: 18px;
            line-height: 24px;
            margin-right: 48px;
            position: relative;
            z-index: 101;
        }

    I map F7 to do it:

        nmap <F7> /{/+1<CR>vi{:sort<CR>

    But I need to press F7 over and over again to get the work done. If the CSS file is big, this is time-consuming and I quickly get bored. I want the commands piped so that I only press F7 once. Any ideas? Thanks!
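
    One way to collapse this into a single keypress is :global, which repeats a range-plus-:sort for every line that opens a block - a sketch worth testing on a copy of the file first, since it assumes one rule per { ... } pair:

        " Sort the property lines inside every { ... } block in one pass:
        :g/{/ .+1,/}/-1 sort

        " Or keep the F7 mapping:
        nmap <F7> :g/{/ .+1,/}/-1 sort<CR>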


  • C++ Storing variables and inheritance

    - by Kaa
    Hello everyone, here is my situation: I have an event-driven system where all my handlers derive from an IHandler class and implement an onEvent(const Event &event) method. Event is the base class for all events and contains only an enumerated event type. All actual events derive from it, including the EventKey event, which has two fields: (uchar) keyCode and (bool) isDown. Here's the interesting part: I generate an EventKey event using the following syntax:

        Event evt = EventKey(15, true);

    and I ship it to the handlers:

        EventDispatch::sendEvent(evt); // void EventDispatch::sendEvent(const Event &event);

    (EventDispatch contains a linked list of IHandlers and calls their onEvent(const Event &event) method with the sent event as the parameter.) Now the actual question: say I want my handlers to queue the events they receive in a queue of type Event - how do I do that?

    - Dynamic pointers with reference counting sound like too big a solution.
    - Making copies is harder than it sounds, since I only receive a reference to the base type; each time, I would need to check the type of the event, downcast to EventKey, and then make a copy to store in the queue. That sounds like the only solution, but it is unpleasant, since I would need to know every single event type and check for each one on every event received - a bad plan.
    - I could allocate the events dynamically and pass pointers around, enqueueing them if wanted - but short of reference counting, how would I keep track of that memory?

    Do you know any way to implement a very light reference counter that wouldn't interfere with the user? What do you think would be a good solution to this design? I thank everyone in advance for your time. Sincerely, Kaa


  • Installing SVN plugin for Eclipse on Ubuntu

    - by Zac
    I am a brand-new Linux user configuring my first-ever dev sandbox in Ubuntu. I have installed Java and Eclipse and am trying to get either Subversive or Subclipse (I don't have a preference either way), but I have a few questions before I start that process. I just opened Synaptic and installed subversion through it. (1) I'm not really sure how SVN deploys locally. My understanding is that SVN has a client and a server; the server manages the repository(ies) and the client just sends commands to the server. Is this correct? If so, then what did I download through Synaptic - the client, and/or the server? (2) Do these Eclipse plugins come with an SVN client or server, or do you have to pre-install SVN before installing the plugins? Basically: is SVN a prerequisite for Subclipse or Subversive? Looking back at these two questions: if someone could first explain the architecture of SVN to me, then explain how that architecture translates to downloading SVN via Synaptic, and then how it translates to downloading/installing either Eclipse plugin, I would see the "big picture" a lot better. Thanks for any and all help!


  • Touch draw in Quartz 2D/Core Graphics

    - by OgreSwamp
    Hello, I'm trying to implement a "hand draw" tool. At the moment the algorithm looks like this (I won't paste the methods, because they are quite big; I'll explain the idea):

    Drawing:
    - In touchesBegan: I create an NSMutableArray *pointsArray, add the point to it, and call setNeedsDisplay:.
    - In touchesMoved: I calculate the points between the last point in pointsArray and the current point, add them all to pointsArray, and call setNeedsDisplay:.
    - In touchesEnded: I calculate the points between the last point in the array and the current point, set a touchesWereFinished flag, and call setNeedsDisplay:.

    Render:
    - drawRect: checks that pointsArray != nil and has data in it. If so, it draws a circle at each point in the array. If the touchesWereFinished flag is set, it saves the current context to a UIImage, releases pointsArray, sets it to nil, and resets the flag.

    There are a lot of disadvantages to this method: it is slow; it becomes extremely slow when the user touches and moves a finger for a long time, as the array becomes enormous; and "lines" composed of circles are ugly. I would like to change my algorithm to make it a bit faster and the lines smoother. In the end I would like lines like in this picture: http://2.bp.blogspot.com/_r5VzEAUYXJ4/SrOYp8tJCPI/AAAAAAAAAMw/ZwDKXiHlhV0/s320/SketchBook+Mobile(4).png Can you advise me how I can draw lines this way (smooth, and slim at the edges)? I thought of drawing circles with an alpha gradient on the edges (to make the lines smoother), but that would be extremely slow IMHO. Thanks for the help.
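
    One common way to get smooth strokes without stamping circles is to stroke a single path with round caps and joins, curving to segment midpoints so the pieces blend into each other. A hedged sketch of such a drawRect: body in plain Quartz calls - points and count stand for your stored array, and the stroke width is arbitrary:

        // Sketch: stroke one smoothed path instead of drawing circles.
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetLineWidth(ctx, 4.0);
        CGContextSetLineCap(ctx, kCGLineCapRound);    // soft ends
        CGContextSetLineJoin(ctx, kCGLineJoinRound);  // soft corners
        CGContextBeginPath(ctx);
        CGContextMoveToPoint(ctx, points[0].x, points[0].y);
        for (int i = 1; i < count; i++) {
            // Curve to the midpoint, using the previous point as control:
            CGFloat midX = (points[i - 1].x + points[i].x) / 2.0;
            CGFloat midY = (points[i - 1].y + points[i].y) / 2.0;
            CGContextAddQuadCurveToPoint(ctx, points[i - 1].x, points[i - 1].y,
                                         midX, midY);
        }
        CGContextStrokePath(ctx);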


  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            # Not found in cache
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # 2592000 seconds = 30 days
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(sender, instance, **kwargs):
            cache_key = 'networks_for_%s' % (instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean? And a last question: I have templates where I fetch many records from a few related tables. In my view I get records from one table, and in the templates I show them along with related info from the others. Generating the page takes a few seconds even for a very small table (<100 records). Is there an easy way to cache the queries issued from templates? Or do I have to build some big structure in my view (with all the related tables), cache that, and send it to the template?
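
    For the last question, one low-tech shape that tends to work: do the expensive queries in the view behind a small get-or-set helper, force evaluation with list() so real rows (not a lazy queryset) get cached, and hand the result to the template; per-row queries triggered from templates are also often a sign that select_related() is missing on the queryset. A sketch reusing the Regions example from above:

        from django.core.cache import cache

        def cached(key, timeout, fetch):
            """Sketch: run fetch() only on a cache miss."""
            value = cache.get(key)
            if value is None:
                value = fetch()
                cache.set(key, value, timeout)
            return value

        # list() forces the queryset, so the cached object is real data:
        regions = cached('regions', 2592000,
                         lambda: list(Regions.objects.all()))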


  • Design an Application That Stores and Processes Files

    - by phasetwenty
    I'm tasked with writing an application that acts as a central storage point for files (usually document formats) provided by other applications. It also needs to take commands like "file 395 needs a copy in X format", at which point some work is offloaded to a third-party application. I'm having trouble coming up with a strategy for this. I'd like to keep the design as simple as possible, so I'd like to avoid big extra frameworks or techniques like threads for as long as that makes sense. The clients are expected to be web applications (for example, one is a Django application that receives files from our customers; the others are not yet implemented). The platform will likely be Python on Linux, unless I hear a strong argument to use something else. In the beginning I thought I could fit the information I wanted to communicate into the filenames and let my application parse the filename to figure out what to do, but this is proving too inflexible for the amount of information I now realize I need to make available. Another idea is to pair FTP with a database as the communication medium (the client uploads a file and adds a command as a row in a table), but I don't like this idea, because adding commands - a change I know is coming - looks like it would require changing database schemas as well as code, and it would muddy up the interface my clients have to use. I looked into Pyro to let the applications communicate more directly, but I don't like the idea of running an extra name server for this one purpose, and I don't see a good way to do file transfer within that framework. What I'm looking for is techniques and/or technologies applicable to my problem. At the simplest level, I need the ability to accept files, and messages with them.
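
    At its simplest, "a file plus a message" can be a stored blob with a JSON manifest sitting next to it; new commands become new manifest fields, so nothing resembling a schema change is needed. A sketch of the intake side - the storage path and field names are illustrative only:

        import json
        import os
        import shutil
        import uuid

        STORE = "/var/lib/filestore"   # hypothetical location

        def accept(src_path, message):
            """Sketch: store a file with a JSON manifest describing it."""
            file_id = str(uuid.uuid4())
            dst = os.path.join(STORE, file_id)
            shutil.copy(src_path, dst)
            manifest = {"id": file_id,
                        "orig_name": os.path.basename(src_path)}
            manifest.update(message)   # e.g. {"command": "copy", "format": "X"}
            with open(dst + ".json", "w") as f:
                json.dump(manifest, f)
            return file_id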


  • Convert long/lat to pixel x/y on a given picture

    - by Kalinin
    I have a city "map" image (for example, Moscow). It accurately follows the outline of the city in Google Maps (that is, it was traced from Google Maps and lightly processed, but the shape is the same). I also have the coordinates of objects in the city (in Google's coordinates). The problem: how do I convert Google coordinates to coordinates on my picture (that is, to pixel positions along OX and OY)? In other words, I receive Google coordinates and need to draw that point on my picture. The most desired answer would be based on JavaScript, but PHP is also possible. I know that at small scales (for example, city scale) this is simple enough: learn the Google coordinates of one corner of the picture, then learn the "price" of one pixel in Google coordinates along the OX and OY axes separately. But at large scales (country scale) the "price" of one pixel is not constant - it varies quite strongly, so the method described above cannot be applied. How do I solve the problem at country scale?
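
    The non-constant "price" per pixel is exactly the Mercator projection Google Maps uses, so the fix is to project both the point and the picture's corners with the same formula and subtract, rather than interpolating linearly. A sketch of the standard Web Mercator world-pixel math (JavaScript or PHP ports are mechanical):

        import math

        TILE = 256  # Google Maps tile size in pixels

        def latlng_to_world_pixel(lat, lng, zoom):
            """Web Mercator; valid for |lat| below ~85.05 degrees."""
            scale = TILE * (2 ** zoom)
            x = (lng + 180.0) / 360.0 * scale
            siny = math.sin(math.radians(lat))
            y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
            return x, y

    Project the top-left and bottom-right corners of your picture once, then a point's position on the image is (point_x - top_left_x) scaled by image_width / (bottom_right_x - top_left_x), and likewise for y. Because the nonlinearity lives in the projection itself, this stays correct at country scale.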


  • Speed of QHash lookups using QStrings as keys.

    - by Ryan R.
    I need to draw a dynamic overlay on a QImage. The component parts of the overlay are defined in XML and parsed out into a QHash<QString, QPicture>, where the QString is the name (such as "crosshairs") and the QPicture is the resolution-independent drawing. I then draw components of the overlay as they are needed, at positions determined at runtime. Example: I have 10 pictures in my QHash composing every possible element of a HUD. During a particular frame of video I need to draw 6 of them at different positions on the image. During the next frame something has changed, and now I only need to draw 4 of them, but 2 of those positions have changed. Now to my question: if I am trying to do this quickly, should I redefine my QHash as QHash<int, QPicture> and enumerate the keys to avoid the overhead of string comparisons, or will the comparisons not make a very big impact on performance? I can easily make the conversion to integer keys, as the XML parser and the overlay composer are completely separate classes, but I would like to use a consistent data structure across the application. Should I overcome my desire for consistency and re-usability in order to increase performance? Will it even matter much if I do?


  • Combining two queries on same table

    - by user1830856
    I've looked through several previous questions, but I am struggling to apply the solutions to my specific example: I am having trouble combining query 1 and query 2 below. My query originally returned (among other details) the values "SpentTotal" and "UnderSpent" for all members/users for the current month. My issue is adding two more columns to this original query that return JUST those two columns (SpentTotal and OverSpent) for the previous month's data.

    Original query 1 (current month):

        set @BPlanKey = '##CURRENTMONTH##'
        EXECUTE @RC = Minimum_UpdateForPeriod @BPlanKey

        SELECT cm.clubaccountnumber, bp.Description, msh.PeriodMinObligation,
               msh.SpentTotal, msh.UnderSpent, msh.OverSpent, msh.BilledDate,
               msh.PeriodStartDate, msh.PeriodEndDate, msh.OverSpent
        FROM MinimumSpendHistory msh
        INNER JOIN BillPlanMinimums bpm
            ON msh.BillingPeriodKey = @BPlanKey
           AND bpm.BillPlanMinimumKey = msh.BillPlanMinimumKey
        INNER JOIN BillPlans bp ON bp.BillPlanKey = bpm.BillPlanKey
        INNER JOIN ClubMembers cm
            ON cm.parentmemberkey IS NULL
           AND cm.ClubMemberKey = msh.ClubMemberKey
        ORDER BY cm.clubaccountnumber ASC, msh.BilledDate ASC

    Query 2 returns the same columns for the PREVIOUS month, of which I only need two (SpentTotal and OverSpent), joined onto the query above by account number:

        set @BPlanKeyLastMo = '##PREVMONTH##'
        EXECUTE @RCLastMo = Minimum_UpdateForPeriod @BPlanKeyLastMo

        SELECT cm.clubaccountnumber, bp.Description, msh.PeriodMinObligation,
               msh.SpentTotal, msh.UnderSpent, msh.OverSpent, msh.BilledDate,
               msh.PeriodStartDate, msh.PeriodEndDate, msh.OverSpent
        FROM MinimumSpendHistory msh
        INNER JOIN BillPlanMinimums bpm
            ON msh.BillingPeriodKey = @BPlanKeyLastMo
           AND bpm.BillPlanMinimumKey = msh.BillPlanMinimumKey
        INNER JOIN BillPlans bp ON bp.BillPlanKey = bpm.BillPlanKey
        INNER JOIN ClubMembers cm
            ON cm.parentmemberkey IS NULL
           AND cm.ClubMemberKey = msh.ClubMemberKey
        ORDER BY cm.clubaccountnumber ASC, msh.BilledDate ASC

    A big thank-you to any and all who are willing to lend their help and time. Cheers! AJ

        CREATE TABLE MinimumSpendHistory(
            [MinimumSpendHistoryKey] [uniqueidentifier] NOT NULL,
            [BillPlanMinimumKey] [uniqueidentifier] NOT NULL,
            [ClubMemberKey] [uniqueidentifier] NOT NULL,
            [BillingPeriodKey] [uniqueidentifier] NOT NULL,
            [PeriodStartDate] [datetime] NOT NULL,
            [PeriodEndDate] [datetime] NOT NULL,
            [PeriodMinObligation] [money] NOT NULL,
            [SpentTotal] [money] NOT NULL,
            [CurrentSpent] [money] NOT NULL,
            [OverSpent] [money] NULL,
            [UnderSpent] [money] NULL,
            [BilledAmount] [money] NOT NULL,
            [BilledDate] [datetime] NOT NULL,
            [PriorPeriodMinimum] [money] NULL,
            [IsCommitted] [bit] NOT NULL,
            [IsCalculated] [bit] NOT NULL,
            [BillPeriodMinimumKey] [uniqueidentifier] NOT NULL,
            [CarryForwardCounter] [smallint] NULL,
            [YTDSpent] [money] NOT NULL,
            [PeriodToAccumulateCounter] [int] NULL,
            [StartDate] [datetime] NOT NULL,
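
    A sketch of one way to combine them (hedged, since the right join keys depend on your data): run the two EXECUTEs first, then wrap each month's SELECT in a CTE and LEFT JOIN the previous month onto the current one by account number, so members with no prior-month row still appear:

        ;WITH cur AS (
            SELECT cm.clubaccountnumber, bp.Description, msh.SpentTotal,
                   msh.UnderSpent, msh.OverSpent, msh.BilledDate
            FROM MinimumSpendHistory msh
            INNER JOIN BillPlanMinimums bpm
                ON msh.BillingPeriodKey = @BPlanKey
               AND bpm.BillPlanMinimumKey = msh.BillPlanMinimumKey
            INNER JOIN BillPlans bp ON bp.BillPlanKey = bpm.BillPlanKey
            INNER JOIN ClubMembers cm
                ON cm.parentmemberkey IS NULL
               AND cm.ClubMemberKey = msh.ClubMemberKey
        ), prev AS (
            SELECT cm.clubaccountnumber, msh.SpentTotal, msh.OverSpent
            FROM MinimumSpendHistory msh
            INNER JOIN ClubMembers cm
                ON cm.parentmemberkey IS NULL
               AND cm.ClubMemberKey = msh.ClubMemberKey
            WHERE msh.BillingPeriodKey = @BPlanKeyLastMo
        )
        SELECT cur.*,
               prev.SpentTotal AS PrevSpentTotal,
               prev.OverSpent  AS PrevOverSpent
        FROM cur
        LEFT JOIN prev ON prev.clubaccountnumber = cur.clubaccountnumber
        ORDER BY cur.clubaccountnumber ASC, cur.BilledDate ASC;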

