Search Results

Search found 22900 results on 916 pages for 'pascal case'.

Page 731 of 916

  • How can I superimpose modified loess lines on a ggplot2 qplot?

    - by briandk
    Background Right now, I'm creating a multiple-predictor linear model and generating diagnostic plots to assess regression assumptions. (It's for a multiple regression analysis stats class that I'm loving at the moment :-) My textbook (Cohen, Cohen, West, and Aiken 2003) recommends plotting each predictor against the residuals to make sure that: The residuals don't systematically covary with the predictor The residuals are homoscedastic with respect to each predictor in the model On point (2), my textbook has this to say: Some statistical packages allow the analyst to plot lowess fit lines at the mean of the residuals (0-line), 1 standard deviation above the mean, and 1 standard deviation below the mean of the residuals....In the present case {their example}, the two lines {mean + 1sd and mean - 1sd} remain roughly parallel to the lowess {0} line, consistent with the interpretation that the variance of the residuals does not change as a function of X. (p. 131) How can I modify loess lines? I know how to generate a scatterplot with a "0-line,": # First, I'll make a simple linear model and get its diagnostic stats library(ggplot2) data(cars) mod <- fortify(lm(speed ~ dist, data = cars)) attach(mod) str(mod) # Now I want to make sure the residuals are homoscedastic qplot (x = dist, y = .resid, data = mod) + geom_smooth(se = FALSE) # "se = FALSE" Removes the standard error bands But does anyone know how I can use ggplot2 and qplot to generate plots where the 0-line, "mean + 1sd" AND "mean - 1sd" lines would be superimposed? Is that a weird/complex question to be asking?

    Read the article

  • UpdateModel not working consistently for me if a ddl gets hidden by jQuery

    - by awrigley
    Hi My jQuery code hides a ddl under certain circumstances. When this is the case, after submitting the form, using the UpdateModel doesn't seem to work consistently. My code in the controller: // POST: /IllnessDetail/Edit [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(IllnessDetail ill) { IllnessDetailFormViewModel mfv = new IllnessDetailFormViewModel(ill); if (ModelState.IsValid) { try { IllnessDetail sick = idr.GetLatestIllnessDetailByUsername(User.Identity.Name); UpdateModel(sick); idr.Save(); return RedirectToAction("Current", "IllnessDetail"); } catch { ModelState.AddRuleViolations(mfv.IllnessDetail.GetRuleViolations()); } } return View(new IllnessDetailFormViewModel(ill)); } I have only just started with MVC, and working under a deadline, so am still hazy as to how UpdateModel works. Debugging seems to reveal that the correct value is passed in to the action method: public ActionResult Edit(IllnessDetail ill) And the correct value is put into sick in the following line: IllnessDetailFormViewModel mfv = new IllnessDetailFormViewModel(ill); However, when all is said, done and returned to the client, what displays is the value of: sick.IdInfectiousAgent instead of the value of: ill.IdInfectiousAgent The only reason I can think of is that the ddlInfectiousAgent has been hidden by jQuery. Or am I barking up the wrong lamp post? Andrew

    Read the article

  • Passing IDisposable objects through constructor chains

    - by Matt Enright
    I've got a small hierarchy of objects that in general gets constructed from data in a Stream, but for some particular subclasses, can be synthesized from a simpler argument list. In chaining the constructors from the subclasses, I'm running into an issue with ensuring the disposal of the synthesized stream that the base class constructor needs. It's not escaped me that the use of IDisposable objects this way is possibly just dirty pool (plz advise?) for reasons I've not considered, but, this issue aside, it seems fairly straightforward (and good encapsulation). Codes: abstract class Node { protected Node (Stream raw) { // calculate/generate some base class properties } } class FilesystemNode : Node { public FilesystemNode (FileStream fs) : base (fs) { // all good here; disposing of fs not our responsibility } } class CompositeNode : Node { public CompositeNode (IEnumerable some_stuff) : base (GenerateRaw (some_stuff)) { // rogue stream from GenerateRaw now loose in the wild! } static Stream GenerateRaw (IEnumerable some_stuff) { var content = new MemoryStream (); // molest elements of some_stuff into proper format, write to stream content.Seek (0, SeekOrigin.Begin); return content; } } I realize that not disposing of a MemoryStream is not exactly a world-stopping case of bad CLR citizenship, but it still gives me the heebie-jeebies (not to mention that I may not always be using a MemoryStream for other subtypes). It's not in scope, so I can't explicitly Dispose () it later in the constructor, and adding a using statement in GenerateRaw () is self-defeating since I need the stream returned. Is there a better way to do this? Preemptive strikes: yes, the properties calculated in the Node constructor should be part of the base class, and should not be calculated by (or accessible in) the subclasses. I won't require that a stream be passed into CompositeNode (its format should be irrelevant to the caller). The previous iteration had the value calculation in the base class as a separate protected method, which I then just called at the end of each subtype constructor, and moved the body of GenerateRaw () into a using statement in the body of the CompositeNode constructor. But the repetition of requiring that call for each constructor and not being able to guarantee that it be run for every subtype ever (a Node is not a Node, semantically, without these properties initialized) gave me heebie-jeebies far worse than the (potential) resource leak here does.

    Read the article

  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that requires me to convert my existing live production (I've duplicated the schema on my local development box, don't worry :)) table column types from enums to strings. Background: Basically, a previous developer left my codebase in absolute shit, migration versions are extremely out of date, and apparently he never used it after a certain point of time in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5 because my table columns have ENUM column types and they convert to :string, :limit => 0 in my schema.rb, which creates the problem of an invalid default value when doing a rake db:test:prepare, as in the case of: Mysql::Error: Invalid default value for 'own_vehicle': CREATE TABLE `lifestyles` (`id` int(11) DEFAULT NULL auto_increment PRIMARY KEY, `member_id` int(11) DEFAULT 0 NOT NULL, `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL, `hobbies` text, `sports` text, `AStar_activities` text, `how_know_IRC` varchar(100), `IRC_referral` varchar(200), `IRC_others` varchar(100), `IRC_rdrive` varchar(30)) ENGINE=InnoDB I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM types and replaces them with VARCHAR, and I'm wondering if this is the right way to approach this problem. I'm also not very sure how to write it such that it would loop through my database tables and replace all ENUM column types with VARCHAR. References [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql [2] http://dev.rubyonrails.org/ticket/2832

    Read the article

  • CSS3 animations - how to make different animations to different elements on scroll

    - by DeanDeey
    I have a question about doing CSS3 animations on scroll. I found some code that shows the DIV element after the user scrolls down: <script type="text/javascript"> $(document).ready(function() { /* Every time the window is scrolled ... */ $(window).scroll( function(){ /* Check the location of each desired element */ $('.hideme').each( function(i){ var bottom_of_object = $(this).position().top + $(this).outerHeight(); var bottom_of_window = $(window).scrollTop() + $(window).height(); /* If the object is completely visible in the window, fade it in */ if( bottom_of_window > bottom_of_object ){ $(this).animate({'opacity':'1'},500); } }); }); }); </script> I'm building a site which will have about 5 different boxes, each of which will be shown when the user scrolls down to it. But my question is, how do I make the animation inside those boxes? For example: in each box there is a title, content and images. How do I make all the elements appear one after another? With this code, all the elements with the class are shown at once, but I would like that when the user scrolls down, first the title appears, then the content and finally the images. And then when the user scrolls to the next box, the same process repeats itself. I used some CSS3 delays, but in this case I don't know how long the user will take to scroll down. Thanks!

    Read the article

  • Single Responsibility Principle vs Anemic Domain Model anti-pattern

    - by Niall Connaughton
    I'm in a project that takes the Single Responsibility Principle pretty seriously. We have a lot of small classes and things are quite simple. However, we have an anemic domain model - there is no behaviour in any of our model classes, they are just property bags. This isn't a complaint about our design - it actually seems to work quite well. During design reviews, SRP is brought out whenever new behaviour is added to the system, and so new behaviour typically ends up in a new class. This keeps things very easily unit testable, but I am perplexed sometimes because it feels like pulling behaviour out of the place where it's relevant. I'm trying to improve my understanding of how to apply SRP properly. It seems to me that SRP is in opposition to adding business modelling behaviour that shares the same context to one object, because the object inevitably ends up either doing more than one related thing, or doing one thing but knowing multiple business rules that change the shape of its outputs. If that is so, then it feels like the end result is an Anemic Domain Model, which is certainly the case in our project. Yet the Anemic Domain Model is an anti-pattern. Can these two ideas coexist? EDIT: A couple of context-related links: SRP - http://www.objectmentor.com/resources/articles/srp.pdf Anemic Domain Model - http://martinfowler.com/bliki/AnemicDomainModel.html I'm not the kind of developer who just likes to find a prophet and follow what they say as gospel, so I don't provide links to these as a way of stating "these are the rules", just as a source of definition of the two concepts.

    Read the article

  • Scapy install issues. Nothing seems to actually be installed?

    - by Chris
    I have an apple computer running Leopard with python 2.6. I downloaded the latest version of scapy and ran "python setup.py install". All went according to plan. Now, when I try to run it in interactive mode by just typing "scapy", it throws a bunch of errors. What gives! Just in case, here is the FULL error message.. INFO: Can't import python gnuplot wrapper . Won't be able to plot. INFO: Can't import PyX. Won't be able to use psdump() or pdfdump(). ERROR: Unable to import pcap module: No module named pcap/No module named pcapy ERROR: Unable to import dnet module: No module named dnet Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/runpy.py", line 122, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/runpy.py", line 34, in _run_code exec code in run_globals File "/Users/owner1/Downloads/scapy-2.1.0/scapy/__init__.py", line 10, in <module> interact() File "scapy/main.py", line 245, in interact scapy_builtins = __import__("all",globals(),locals(),".").__dict__ File "scapy/all.py", line 25, in <module> from route6 import * File "scapy/route6.py", line 264, in <module> conf.route6 = Route6() File "scapy/route6.py", line 26, in __init__ self.resync() File "scapy/route6.py", line 39, in resync self.routes = read_routes6() File "scapy/arch/unix.py", line 147, in read_routes6 lifaddr = in6_getifaddr() File "scapy/arch/unix.py", line 123, in in6_getifaddr i = dnet.intf() NameError: global name 'dnet' is not defined

    Read the article

  • general learning methodology

    - by momo
    I just wanted to hear about the different general learning paths people embark on when learning a new language/framework. The one I currently use, which is how I learned Bash and am currently learning Python, is: 1) an instant hacking tutorial (a very short tutorial introducing the basic syntax, variable declaration, loops, data types, etc. and how they are generally used); 2) an in-depth tutorial with good programming style and slightly topic-specific content (e.g. Mark Pilgrim's Dive into Python) - important topics for me personally are regex methods, file IO, and the ways the different data types are best utilized (I wrote a very primitive Bayesian spam filter using Python's dictionaries to keep track of word occurrences); 3) spaced repetition of syntax or short recipes (I use Anki, with questions like 'create dictionary with filename and filesize metadata, human-readable' or simpler ones like 'match 0 - 3 occurrences of the letter M in a string', or 'return/create an iterator from two sequences'). The use of spaced repetition has been invaluable, and I credit it with the ease with which I can recall/create Python algorithms. However, I've recently started looking into Django, and I've found that spaced repetition, at least in my case, doesn't work very well for learning a framework; it works best with short code recipes (either that or I should start looking into more basic Django framework tutorials). The problem I'm encountering is that framework programming is not only about algorithms but about actually learning the API, which can be quite complex, since you have to learn all the methods, the modules, the places where they are stored, and the sequence in which things have to be done. For example, in Django, to start a project that deals with polls (from the Django tutorial), one has to create the project, edit the settings.py file, create the polls app, edit the models.py file (which requires knowing the classes that are present in the models module), edit the urls.py file, etc. I found that my spaced repetition method didn't work very well for this type of learning, so I wanted to ask you guys what method(s) you use for learning the different frameworks/APIs.

    Read the article

  • Simplest distributed persistent key/value store that supports primary key range queries

    - by StaxMan
    I am looking for a properly distributed (i.e. not just sharded) and persisted (not bounded by available memory on a single node or cluster of nodes) key/value ("NoSQL") store that does support range queries by primary key. So far the closest such system is Cassandra, which does the above. However, it adds support for other features that are not essential for me. So while I like it (and will consider using it, of course), I am trying to figure out if there might be other mature projects that implement what I need. Specifically, the only aspect of the value I need is to access it as a blob. For the key, however, I need range queries (as in, access values ordered, limited by start and/or end values). While values can have structure, there is no need to use that structure for anything on the server side (I can do client-side data binding, flexible value/content types, etc.). As an added bonus, Cassandra-style storage (journaled, all sequential writes) seems quite optimal for my use case. To help filter out answers, I have investigated some alternatives within the general domain, like Voldemort (key/value, but no ordering) and CouchDB (just sharded, more batch-oriented); and I am aware of systems that are not quite distributed while otherwise qualifying (BDB variants, Tokyo Cabinet itself (not sure if Tyrant might qualify), Redis (in-memory store only)).

    Read the article

  • My android tests don't get internet access!

    - by Malachii
    The subject says it all. My application gets internet access thanks to the android.permission.INTERNET permission, but my test cases don't while using the instrumentation test runner. This means I can't test my server IO routines in my test cases. What's up? Here's my manifest in case it helps you. Thanks! Sorry about the lack of indents - could not get it working on short notice with this site. Thanks! <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.helloandroid" android:versionCode="1" android:versionName="1.0"> <uses-permission android:name="android.permission.INTERNET"></uses-permission> <application android:icon="@drawable/icon" android:label="@string/app_name"> <uses-library android:name="android.test.runner" /> <activity android:name=".HelloAndroid" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> <uses-sdk android:minSdkVersion="2" /> <instrumentation android:name="android.test.InstrumentationTestRunner" android:targetPackage="qnext.mobile.redirect" android:label="Qnext Redirect Tests" /> </manifest>

    Read the article

  • questions about name mangling in C++

    - by Tim
    I am trying to learn and understand name mangling in C++. Here are some questions: (1) From devx When a global function is overloaded, the generated mangled name for each overloaded version is unique. Name mangling is also applied to variables. Thus, a local variable and a global variable with the same user-given name still get distinct mangled names. Are there other examples that are using name mangling, besides overloading functions and same-name global and local variables ? (2) From Wiki The need arises where the language allows different entities to be named with the same identifier as long as they occupy a different namespace (where a namespace is typically defined by a module, class, or explicit namespace directive). I don't quite understand why name mangling is only applied to the cases when the identifiers belong to different namespaces, since overloading functions can be in the same namespace and same-name global and local variables can also be in the same space. How to understand this? Do variables with same name but in different scopes also use name mangling? (3) Does C have name mangling? If it does not, how can it deal with the case when some global and local variables have the same name? C does not have overloading functions, right? Thanks and regards!

    Read the article

  • Silverlight 3 data-binding child property doesn't update

    - by sonofpirate
    I have a Silverlight control that has my root ViewModel object as its data source. The ViewModel exposes a list of Cards as well as a SelectedCard property which is bound to a drop-down list at the top of the view. I then have a form of sorts at the bottom that displays the properties of the SelectedCard. My XAML appears as (reduced for simplicity): <StackPanel Orientation="Vertical"> <ComboBox DisplayMemberPath="Name" ItemsSource="{Binding Path=Cards}" SelectedItem="{Binding Path=SelectedCard, Mode=TwoWay}" /> <TextBlock Text="{Binding Path=SelectedCard.Name}" /> <ListBox DisplayMemberPath="Name" ItemsSource="{Binding Path=SelectedCard.PendingTransactions}" /> </StackPanel> I would expect the TextBlock and ListBox to update whenever I select a new item in the ComboBox, but this is not the case. I'm sure it has to do with the fact that the TextBlock and ListBox are actually bound to properties of the SelectedCard, so it is listening for property change notifications for the properties on that object. But I would have thought that data-binding would be smart enough to recognize that the parent object in the binding expression had changed and update the entire binding. It bears noting that the PendingTransactions property (bound to the ListBox) is lazy-loaded. So, the first time I select an item in the ComboBox, I do make the async call and load the list, and the UI updates to display the information corresponding to the selected item. However, when I reselect an item, the UI doesn't change! For example, if my original list contains three cards, I select the first card by default. Data-binding does attempt to access the PendingTransactions property on that Card object and updates the ListBox correctly. If I select the second card in the list, the same thing happens and I get the list of PendingTransactions for that card displayed. But if I select the first card again, nothing changes in my UI! Setting a breakpoint, I am able to confirm that the SelectedCard property is being updated correctly. How can I make this work???

    Read the article

  • PHP cURL error: "Empty reply from server"

    - by ABach
    All, I have a class function to interface with the RESTful API for Last.FM - its purpose is to grab the most recent tracks for my user. Here it is: private static $base_url = 'http://ws.audioscrobbler.com/2.0/'; public static function getTopTracks($options = array()) { $options = array_merge(array( 'user' => 'bachya', 'period' => NULL, 'api_key' => 'xxxxx...', // obfuscated, obviously ), $options); $options['method'] = 'user.getTopTracks'; // Initialize cURL request and set parameters $ch = curl_init(); curl_setopt_array($ch, array( CURLOPT_URL => self::$base_url, CURLOPT_POST => TRUE, CURLOPT_POSTFIELDS => $options, CURLOPT_RETURNTRANSFER => TRUE, CURLOPT_TIMEOUT => 30, CURLOPT_USERAGENT => 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)' )); $results = curl_exec($ch); return $results; } This returns "Empty reply from server". I know that some have suggested that this error comes from some fault in network infrastructure; I do not believe this to be true in my case. If I run a cURL request through the command line, I get my data; the Last.FM service is up and accessible. Before I go to those folks and see if anything has changed, I wanted to check with you fine folks and see if there's some issue in my code that would be causing this. Thanks!

    Read the article

  • Message queue proxy in Python + Twisted

    - by gasper_k
    Hi, I want to implement a lightweight Message Queue proxy. Its job is to receive messages from a web application (PHP) and send them to the Message Queue server asynchronously. The reason for this proxy is that the MQ isn't always available and is sometimes lagging, or even down, but I want to make sure the messages are delivered and the web application returns immediately. So, PHP would send the message to the MQ proxy running on the same host. That proxy would save the messages to SQLite for persistence, in case of crashes. At the same time it would send the messages from SQLite to the MQ in batches when the connection is available, and delete them from SQLite. Now, the way I understand it, there are these components in this service: 1) a message listener (listens to the messages from PHP and writes them to an Incoming Queue); 2) a DB flusher (reads messages from the Incoming Queue and saves them to a database; needed due to SQLite's single-threadedness); 3) an MQ connection handler (keeps the connection to the MQ server online by reconnecting); 4) a message sender (collects messages from the SQLite db and sends them to the MQ server, then removes them from the db). I was thinking of using Twisted for #1 (TCPServer), but I'm having problems integrating it with the other components, which aren't event-driven. Intuition tells me that each of these components should be running in a separate thread, because all are IO-bound and independent of each other, but I could easily put them in a single thread. Even so, I couldn't find any good and clear (to me) examples of how to implement such a worker thread alongside Twisted's main loop. The example I've started with is chatserver.py, which uses service.Application and internet.TCPServer objects. If I start my own thread prior to creating the TCPServer service, it runs a few times, but then it stops and never runs again. I'm not sure why this is happening, but it's probably because I don't use threads with Twisted correctly. Any suggestions on how to implement a separate worker thread and keep Twisted? Do you have any alternative architectures in mind?
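
    A rough sketch of the single-threaded shape this could take - everything runs in the reactor thread, so SQLite's single-threadedness stops being a concern - assuming Twisted and the standard sqlite3 module; the port number and the forward_batch() stand-in are invented for illustration:

        # Sketch: accept messages over TCP, persist them to SQLite, and flush
        # periodically from the same reactor thread. forward_batch() is a stub
        # standing in for the real MQ client.
        import sqlite3
        from twisted.internet import reactor, task
        from twisted.internet.protocol import Factory
        from twisted.protocols.basic import LineReceiver

        db = sqlite3.connect('proxy_queue.db')
        db.execute('CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, body TEXT)')

        def forward_batch(bodies):
            # Replace with the real MQ client; return True only when delivery succeeded.
            return False

        class IncomingMessage(LineReceiver):
            def lineReceived(self, line):
                # Components 1 and 2 (listener + DB flusher) collapse into one step here.
                db.execute('INSERT INTO outbox (body) VALUES (?)', (line,))
                db.commit()
                self.sendLine('OK')

        class IncomingFactory(Factory):
            protocol = IncomingMessage

        def flush_to_mq():
            # Components 3 and 4: batch up rows and delete them once the MQ accepts them.
            rows = db.execute('SELECT id, body FROM outbox ORDER BY id LIMIT 100').fetchall()
            if rows and forward_batch([body for _, body in rows]):
                db.execute('DELETE FROM outbox WHERE id <= ?', (rows[-1][0],))
                db.commit()

        task.LoopingCall(flush_to_mq).start(5.0)   # retry a flush every 5 seconds
        reactor.listenTCP(8125, IncomingFactory())
        reactor.run()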

    Read the article

  • Diagnosing IIS Shutdowns

    - by Tom Ritter
    Symptoms: 1) I attach a debugger, wait a little while, and it automatically detaches; 2) I watch the event log during normal operation - after a single request comes in, it waits a little bit, then shuts down. Diagnosing: I've followed the following steps for logging shutdowns in IIS: http://weblogs.asp.net/scottgu/archive/2005/12/14/433194.aspx http://blogs.msdn.com/tess/archive/2006/08/02/asp-net-case-study-lost-session-variables-and-appdomain-recycles.aspx I know these are working because of what I see in the Event Logs when I change the web.config: The description for Event ID 0 from source ASP.NET 2.0.50727.0 cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: _shutdownMessage=IIS configuration change HostingEnvironment initiated shutdown CONFIG change CONFIG change HostingEnvironment caused shutdown _shutdownStack= at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo) at System.Environment.get_StackTrace() at System.Web.Hosting.HostingEnvironment.InitiateShutdownInternal() at System.Web.Hosting.HostingEnvironment.InitiateShutdown() at System.Web.Hosting.PipelineRuntime.StopProcessing() the message resource is present but the message is not found in the string/message table But it doesn't help, because the mystery error doesn't tell me anything. I see the same thing as before I added this extra logging: The description for Event ID 0 from source ASP.NET 2.0.50727.0 cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: _shutdownMessage=HostingEnvironment initiated shutdown HostingEnvironment caused shutdown _shutdownStack= at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo) at System.Environment.get_StackTrace() at System.Web.Hosting.HostingEnvironment.InitiateShutdownInternal() at System.Web.Hosting.HostingEnvironment.InitiateShutdown() at System.Web.Hosting.PipelineRuntime.StopProcessing() the message resource is present but the message is not found in the string/message table Anyone have any ideas for more debugging?

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9M on the disk, 10M of indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts are in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20M on the disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. There was no WAL for the 180s case - I was testing in psql directly; in Django, add ~50% overhead for WAL logging. I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10M worth of inserts generate 10 x 16M log segments? Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys to: a small table (198 rows, 16k on disk); a large table (1.2M rows, 59 MB data + 89 MB index on disk); another large table (2.2M rows, 198 + 210 MB). So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining and saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
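
    One commonly suggested workaround for exactly this pattern - drop the foreign keys before the hourly rebuild and re-create them afterwards, so the constraint is validated in one pass instead of once per inserted row - sketched here with psycopg2; the table, column and constraint names are invented:

        # Sketch only: rebuild an hourly table without paying per-row FK checks.
        import psycopg2

        conn = psycopg2.connect("dbname=app user=app")
        cur = conn.cursor()

        cur.execute("ALTER TABLE hourly_stats DROP CONSTRAINT hourly_stats_owner_fk")
        cur.execute("TRUNCATE hourly_stats")
        cur.execute("INSERT INTO hourly_stats SELECT * FROM hourly_stats_staging")
        # Re-adding the constraint validates all rows in a single scan.
        cur.execute("""ALTER TABLE hourly_stats
                       ADD CONSTRAINT hourly_stats_owner_fk
                       FOREIGN KEY (owner_id) REFERENCES owners (id)""")
        conn.commit()
        cur.close()
        conn.close()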

    Read the article

  • PHP Regex to match all-caps lines with occasional hyphens.

    - by Yaaqov
    I'm trying to convert an existing PHP regular expression match case to apply to a slightly different style of document. Here's the original style of the document: **FOODS - TYPE A** ___________________________________ **PRODUCT** 1) Mi Pueblito Queso Fresco Authentic Mexican Style Fresh Cheese; 2) La Fe String Cheese **CODE** Sell by date going back to February 1, 2009 And here is the successfully-running PHP regex match code that only returns "true" if the line is surrounded by asterisks, and stores each side of the "-" as $m[1] and $m[2], respectively: if ( preg_match('#^\*\*([^-]+)(?:-(.*))?\*\*$#', $line, $m) ) { // only for **header - subheader** $m[2] is set. if ( isset($m[2]) ) { return array(TYPE_HEADER, array(trim($m[1]), trim($m[2]))); } else { return array(TYPE_KEY, array($m[1])); } } So, for line 1: $m[1] = "FOODS" AND $m[2] = "TYPE A"; Line 2 would be skipped; Line 3: $m[1] = "PRODUCT", etc. The question: how would I re-write the above regex match if the headers did not have the asterisks, but were still all-caps and at least 4 characters long? For example: FOODS - TYPE A ___________________________________ PRODUCT 1) Mi Pueblito Queso Fresco Authentic Mexican Style Fresh Cheese; 2) La Fe String Cheese CODE Sell by date going back to February 1, 2009 Thank you.

    Read the article

  • Ditching Django's models for Ajax/Web Services

    - by Igor Ganapolsky
    Recently I came across a problem at work that made me rethink Django's models. The app I am developing resides on a Linux server. It's a simple model/view/controller app that involves user interaction and updating data in the database. The problem is that this data resides in an MS SQL database on a Windows machine. So in order to use Django's models, I would have to leverage an ODBC driver on Linux, and then use a Python add-on like pyodbc. Well, let me tell you, setting up a reliable and functional ODBC connection on Linux is no easy feat! So much so that I spent several hours maneuvering this on my CentOS box with no luck, and was left with frustration and lots of dumb system errors. In the meantime I have a deadline to meet, and suddenly the very agile and rapid Django application is a roadblock rather than a pleasure to work with. Someone on my team suggested writing this app in .NET. But there are a few problems with that: it won't be deployable on a Linux machine, and I won't be able to work on it since I don't know ASP.NET. Then a much better suggestion was made: keep the app in Django, but instead of using models, make straight-up Ajax/web service calls in the template. And then it dawned on me - what a great idea. Django's models seem like a nuisance and a hindrance in this case, and I can just have someone else write .NET services on their side that I can call from my template. As a result my app will be leaner and more compact. So, I was wondering if you guys ever came across a similar dilemma and what you decided to do about it.
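
    To make the suggestion concrete, here is a rough sketch (Python 2 era; the endpoint URL and field names are invented) of a Django view that skips the ORM entirely and talks to an external .NET/JSON web service instead:

        # Sketch: fetch and update data through a remote web service rather than models.
        import json
        import urllib2

        from django.http import HttpResponse
        from django.shortcuts import render_to_response

        SERVICE_URL = 'http://winbox.example.com/api/records'   # hypothetical endpoint

        def record_list(request):
            records = json.loads(urllib2.urlopen(SERVICE_URL, timeout=10).read())
            return render_to_response('records/list.html', {'records': records})

        def record_update(request, record_id):
            payload = json.dumps({'status': request.POST.get('status')})
            req = urllib2.Request(SERVICE_URL + '/' + record_id, payload,
                                  {'Content-Type': 'application/json'})
            urllib2.urlopen(req, timeout=10)
            return HttpResponse('ok')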

    Read the article

  • BeginInvoke on ObservableCollection not immediate.

    - by Padu Merloti
    In my code I subscribe to an event that happens on a different thread. Every time this event happens, I receive a string that is posted to the observable collection: Dispatcher currentDispatcher = Dispatcher.CurrentDispatcher; var SerialLog = new ObservableCollection<string>(); private void hitStation_RawCommandSent(object sender, StringEventArgs e) { string command = e.Value.Replace("\r\n", ""); Action dispatchAction = () => SerialLog.Add(command); currentDispatcher.BeginInvoke(dispatchAction, DispatcherPriority.Render); } The code below is in my view model (it could be in the code-behind, it doesn't matter in this case). When I call "hitStation.PrepareHit", the event above gets called a couple of times, then I wait and call "hitStation.HitBall", and the event above gets called a couple more times. private void HitBall() { try { try { Mouse.OverrideCursor = Cursors.Wait; //prepare hit hitStation.PrepareHit(hitSpeed); Thread.Wait(1000); PlayWarning(); //hit hitStation.HitBall(hitSpeed); } catch (TimeoutException ex) { MessageBox.Show("Timeout hitting ball: " + ex.Message); } } finally { Mouse.OverrideCursor = null; } } The problem I'm having is that the ListBox that is bound to my SerialLog gets updated only when the HitBall method finishes. I was expecting to see a bunch of updates from the PrepareHit, a pause, and then a bunch more updates from the HitBall. I've tried a couple of DispatcherPriority arguments, but they don't seem to have any effect.

    Read the article

  • Using PLINQ to calculate and update values within the enclosure does not work

    - by Keith
    I recently needed to do a running total on a report, where for each group I order the rows and then calculate the running total based on the previous rows within the group. Aha! I thought, a perfect use case for PLINQ! However, when I wrote up the code, I got some strange behavior. The values I was modifying showed as being modified when stepping through the debugger, but when they were accessed they were always zero. Sample Code: class Item { public int PortfolioID; public int TAAccountID; public DateTime TradeDate; public decimal Shares; public decimal RunningTotal; } List<Item> itemList = new List<Item> { new Item { PortfolioID = 1, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 1), Shares = 5.335m, }, new Item { PortfolioID = 1, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 2), Shares = -2.335m, }, new Item { PortfolioID = 2, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 1), Shares = 7.335m, }, new Item { PortfolioID = 2, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 2), Shares = -3.335m, }, }; var found = (from i in itemList where i.TAAccountID == 1 select new Item { TAAccountID = i.TAAccountID, PortfolioID = i.PortfolioID, Shares = i.Shares, TradeDate = i.TradeDate, RunningTotal = 0 }); found.AsParallel().ForAll(x => { var prevItems = found.Where(i => i.PortfolioID == x.PortfolioID && i.TAAccountID == x.TAAccountID && i.TradeDate <= x.TradeDate); x.RunningTotal = prevItems.Sum(s => s.Shares); }); foreach (Item i in found) { Console.WriteLine("Running total: {0}", i.RunningTotal); } Console.ReadLine(); If I change the select for found to be .ToArray(), then it works fine and I get calculated results. Any ideas what I am doing wrong?

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side-by-side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear to come about when storing deeper documents - in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples of this I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.

    Read the article

  • Webfaction apache + mod_wsgi + django configuration issue

    - by Dmitry Guyvoronsky
    I stumbled upon a problem recently and, even though I solved it, I would like to hear your opinion on what the correct/simple/accepted solution would be. I'm developing a website using Django + Python. When I run it on my local machine with "python manage.py runserver", the local address is http://127.0.0.1:8000/ by default. However, on the production server my app has a different URL, with a path - like "http://server.name/myproj/". I need to generate and use permanent URLs. If I'm using {% url view params %}, I'm getting paths that are relative to /, since my urls.py contains this: urlpatterns = patterns('', (r'^(\d+)?$', 'myproj.myapp.views.index'), (r'^img/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/img' }), (r'^css/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/css' }), ) So far, I see 2 solutions: modify urls.py to include '/myproj/' in the case of a production run; or use request.build_absolute_uri() for creating links in views.py, or pass some variable with 'hostname:port/path' to the templates. Are there prettier ways to deal with this problem? Thank you. Update: Well, the problem seems to be not in Django, but in the way WebFaction configures WSGI. The Apache configuration for an application with the URL "hostname.com/myapp" contains the following line: WSGIScriptAlias / /home/dreamiurg/webapps/pinfont/myproject.wsgi So SCRIPT_NAME is empty, and the only solution I see is to move to mod_python or serve my application from the root. Any ideas?
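
    For the URL-prefix half of the question, one setting worth knowing about - sketched here as a settings.py assumption, not a WebFaction-specific fix - is Django's FORCE_SCRIPT_NAME, which makes reverse() and {% url %} prepend the deployment path without touching urls.py:

        # settings.py sketch: keep urls.py prefix-free and let Django prepend the
        # production path. The environment-variable switch is invented for illustration.
        import os

        if os.environ.get('DEPLOY_ENV') == 'production':
            FORCE_SCRIPT_NAME = '/myproj'     # mirrors http://server.name/myproj/
        else:
            FORCE_SCRIPT_NAME = None          # local runserver stays at http://127.0.0.1:8000/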

    Read the article

  • On-Demand thumbnail creation with django and nginx

    - by sharjeel
    I want to generate thumbnails of images on the fly. My site is built with Django and deployed using nginx, which serves all the static content and communicates with Django/Apache through a reverse proxy. Right now, for every image on my site, I generate all required sizes of thumbnails ahead of time and deliver them when required. The problem is that whenever I change the size of a thumbnail, I have to regenerate all of them (and there are tons of them). Instead, I'd now like to generate a thumbnail the first time it is accessed, and later on nginx would deliver the same file over and over. If I delete that thumbnail file because it is rarely accessed, it should get regenerated automatically the next time. Thumbnails in my case also have watermarks, which require some computation logic from my application, so a web server thumbnail module might not work very well. The size of the thumbnail can be embedded in the URL, so http://www.example.com/thumbnail/abc_320x240.jpg gets the 320x240 size of the thumbnail. The approach I'm looking at right now is to let nginx look up the file and, if it doesn't exist, forward the request to my Django application, which would create the thumbnail and send either the response or a redirect. However, I'm not sure about the concurrency issues and any other issues which might pop up later. What is the appropriate way to achieve this?
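
    A rough sketch of the Django side that nginx could fall back to when the thumbnail file does not exist yet, assuming PIL and the size-in-URL scheme from the question; the directory paths and the watermark step are placeholders:

        # Sketch: generate a missing thumbnail on first request, write it where nginx
        # serves static files from, and return it; later requests never reach Django.
        import os
        from PIL import Image
        from django.http import HttpResponse, Http404

        ORIGINALS = '/srv/media/originals'      # hypothetical paths
        THUMBS = '/srv/media/thumbnails'

        def thumbnail(request, name, width, height):
            width, height = int(width), int(height)
            thumb_path = os.path.join(THUMBS, '%s_%dx%d.jpg' % (name, width, height))
            if not os.path.exists(thumb_path):
                source = os.path.join(ORIGINALS, name + '.jpg')
                if not os.path.exists(source):
                    raise Http404
                img = Image.open(source)
                img.thumbnail((width, height), Image.ANTIALIAS)
                # apply_watermark(img)  # application-specific logic would go here
                img.save(thumb_path, 'JPEG')
            return HttpResponse(open(thumb_path, 'rb').read(), content_type='image/jpeg')

    On the nginx side, a try_files-style rule that serves the thumbnail directly when the file exists and proxies to Django only when it does not would complete the loop; a lock around the generation step is one way to address the concurrency concern.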

    Read the article

  • Searching a column containing CSV data in a MySQL table for existence of input values

    - by Adarsh R
    Hi, I have a table, say ITEM, in MySQL that stores data as follows: ID FEATURES -------------------- 1 AB,CD,EF,XY 2 PQ,AC,A3,B3 3 AB,CDE 4 AB1,BC3 -------------------- As input, I will get a CSV string, something like "AB,PQ". I want to get the records that contain AB or PQ. I realized that we have to write a MySQL function to achieve this. So, if we had this magical function MATCH_ANY defined in MySQL that does this, I would then simply execute SQL as follows: select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0 The above query would return records 1, 2 and 3. But I'm running into all sorts of problems while implementing this function, as I realized that MySQL doesn't support arrays and there's no simple way to split strings based on a delimiter. Remodeling the table is the last option for me as it involves a lot of issues. I might also want to execute queries containing multiple MATCH_ANY functions, such as: select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0 and MATCH_ANY(FEATURES, "CDE") In the above case, we would get the intersection of records (1, 2, 3) and (3), which would be just 3. Any help is deeply appreciated. Thanks
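
    Before writing a custom function, it may be worth noting that MySQL's built-in FIND_IN_SET() already does the per-value membership test against a comma-separated column. A small Python sketch (MySQLdb; table and column names taken from the question, connection details invented) that builds the OR-of-values query:

        # Sketch: match rows whose comma-separated FEATURES column contains any of
        # the input values, using FIND_IN_SET instead of a custom MATCH_ANY function.
        import MySQLdb

        def items_matching_any(conn, csv_input):
            values = [v.strip() for v in csv_input.split(',') if v.strip()]
            clause = ' OR '.join(['FIND_IN_SET(%s, FEATURES) > 0'] * len(values))
            cur = conn.cursor()
            cur.execute('SELECT ID, FEATURES FROM ITEM WHERE ' + clause, values)
            return cur.fetchall()

        conn = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='appdb')
        print(items_matching_any(conn, 'AB,PQ'))   # should return records 1, 2 and 3

    The "intersection" case maps naturally onto AND-ing two such clauses together.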

    Read the article

  • What is a good way to assign order #s to ordered rows in a table in Sybase

    - by DVK
    I have a table T (structure below) which initially contains all-NULL values in an integer order column: col1 varchar(30), col2 varchar(30), order int NULL I also have a way to order by the "colN" columns, e.g. SELECT * FROM T ORDER BY some_expression_involving_col1_and_col2 What's the best way to assign - IN SQL - numeric order values 1-N to the order column, so that the order values match the order of rows returned by the above ORDER BY? In other words, I would like a single query (Sybase SQL syntax, so no Oracle rowcount) which assigns order values so that SELECT * FROM T ORDER BY order returns exactly the same order of rows as the query above. The query does NOT necessarily need to update the table T in place; I'm OK with creating a copy of the table T2 if that'll make the query simpler. NOTE1: A solution must be a real query or a set of queries, not involving a loop or a cursor. NOTE2: Assume that the data is uniquely orderable according to the ORDER BY above - no need to worry about the situation where 2 rows could be assigned the same order at random. NOTE3: I would prefer a generic solution, but if you want a specific example of an ordering expression, let's say: SELECT * FROM T ORDER BY CASE WHEN col1="" THEN "AAAAAA" ELSE col1 END, ISNULL(col2, "ZZZ")

    Read the article
