Search Results

Search found 18092 results on 724 pages for 'matt long'.

Page 577/724

  • Error setting env through subprocess.call to run a Python script on a remote Linux machine

    - by John Smith
    I am running a Python script on a Windows machine to invoke another Python script on a remote Linux machine. I am using subprocess.call with ssh to do this, like below: subprocess.call('ssh -i <identity file> username@hostname python <script_on_linux_machine>') and this works fine. However, if I want to set some environment variables, like below: subprocess.call('ssh -i <identity file> username@hostname python <script_on_linux_machine>', env={key1:value1}) it fails. I get the following error: ssh_connect: getnameinfo failed ssh: connect to host <hostname> port 22: Operation not permitted 255 I've tried splitting the ssh command into a list and passing that; it didn't help. I've tried running other 'local' (Windows) commands through subprocess.call() while setting the env, and that works fine. I've also tried running other commands (such as ls) on the remote Linux machine; again, subprocess.call() works fine as long as I don't try to set the environment. What am I doing wrong? Would I be able to set the environment for a Python script on a remote machine? Any help will be appreciated.
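
    A minimal sketch of one way around this, under two assumptions: the variables are meant for the remote script rather than for ssh itself, and the failure is caused by env= replacing the entire local environment of the ssh process (which on Windows can strip variables such as SYSTEMROOT that networking depends on). Note that env= only affects the local ssh process anyway; it never propagates to the remote shell. All names below are placeholders.

        import os
        import subprocess

        identity_file = 'id_rsa'                    # placeholder
        remote = 'username@hostname'                # placeholder
        remote_script = 'script_on_linux_machine.py'

        # Option 1: set the variable on the remote side, as part of the ssh command,
        # so the remote script actually sees it.
        remote_cmd = 'KEY1=value1 python %s' % remote_script
        subprocess.call(['ssh', '-i', identity_file, remote, remote_cmd])

        # Option 2: if env= is really needed locally, start from a copy of os.environ
        # so ssh keeps the variables it relies on, instead of losing them all.
        env = dict(os.environ)
        env['KEY1'] = 'value1'
        subprocess.call(['ssh', '-i', identity_file, remote,
                         'python ' + remote_script], env=env)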

    Read the article

  • Implementing Model-level caching

    - by Byron
    I was posting some comments in a related question about MVC caching, and some questions about the actual implementation came up. How does one implement a Model-level cache that works transparently, without the developer needing to cache manually, yet still remains efficient? I would keep my caching responsibilities firmly within the model. It is none of the controller's or view's business where the model is getting data. All they care about is that when data is requested, data is provided - this is how the MVC paradigm is supposed to work. (Source: Post by Jarrod) The reason I am skeptical is that caching should usually not be done unless there is a real need, and shouldn't be done for things like search results. So somehow the Model itself has to know whether or not the SELECT statement being issued to it is worthy of being cached. Wouldn't the Model have to be astronomically smart, and/or store statistics of what is most often queried over a long period of time, in order to make that decision accurately? And wouldn't the overhead of all this make the caching useless anyway? Also, how would you uniquely identify one query from another query (or, more accurately, one resultset from another resultset)? What if you're using prepared statements, with only the parameters changing according to user input? Another poster said this: I would suggest using the md5 hash of your query combined with a serialized version of your input arguments. That would require twice the number of serialization operations. I was under the impression that serialization was quite expensive, and for large inputs this might be even worse than just re-querying. And is the minuscule chance of collision worth worrying about? Conceptually, caching in the Model seems like a good idea to me, but in practice it seems the developer should have direct control over caching and write it into the controller. Thoughts/ideas? Edit: I'm using PHP and MySQL, if that helps to narrow your focus.
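
    To make the hash-key idea quoted above concrete, here is a small sketch (written in Python purely for brevity - the question itself is about PHP/MySQL, and every name here is invented): cache entries are keyed on a digest of the SQL text plus its bound parameters, and caching is an explicit opt-in per query, so the model never has to guess which SELECTs are worth caching.

        import hashlib
        import json
        import time

        class ModelCache:
            """Tiny in-memory cache keyed on query text + parameters."""
            def __init__(self, ttl=60):
                self.ttl = ttl
                self.store = {}

            def key(self, sql, params):
                # md5 of the query combined with a serialized form of the arguments,
                # as suggested in the quoted comment.
                payload = sql + '|' + json.dumps(params, sort_keys=True, default=str)
                return hashlib.md5(payload.encode('utf-8')).hexdigest()

            def fetch(self, sql, params, run_query, cacheable=False):
                # The model author marks the few queries worth caching; everything
                # else (e.g. search results) simply bypasses the cache.
                if not cacheable:
                    return run_query(sql, params)
                k = self.key(sql, params)
                hit = self.store.get(k)
                if hit and time.time() - hit[0] < self.ttl:
                    return hit[1]
                rows = run_query(sql, params)
                self.store[k] = (time.time(), rows)
                return rows

    The opt-in flag is one answer to the "astronomically smart model" worry: the model still owns the caching, but a human decides per query whether it applies.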

    Read the article

  • Visual Studio 2010 / ASP.NET MVC / Publish

    - by SevenCentral
    I just did a clean install on Windows 7 x64 Professional with the final release of Visual Studio 2010 Premium. In order to duplicate what I'm experiencing, do the following: create a new ASP.NET MVC 2 Web Application; right-click the project and select Properties; on the Web tab, select "Use Local IIS Web Server"; click Create Virtual Directory; save all; unload the project; edit the project file; change MvcBuildViews to true; save all; reload the project; right-click the project and select Publish; choose the file system publish method; enter a target location; choose "Delete all existing files"; select Publish; then right-click the project and select Publish again. Each time I do the above I get the following error: "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level..." The error originates from obj\debug\package\packagetmp\web.config, relative to the project directory. I can repeat this all day long with any MVC 2 project I've built. In order to fix this problem, I need to set MvcBuildViews to false in the project file. That's not really an option. This wasn't a problem in Visual Studio 2008, and it seems to be an issue with the way the Publish command stages files beneath the project directory. Can anyone else duplicate this error? Is this a bug or by design? Is there a fix, workaround, etc...? Thanks.

    Read the article

  • Hibernate query for multiple items in a collection

    - by aarestad
    I have a data model that looks something like this: public class Item { private List<ItemAttribute> attributes; // other stuff } public class ItemAttribute { private String name; private String value; } (this obviously simplifies away a lot of the extraneous stuff) What I want to do is create a query to ask for all Items with one OR MORE particular attributes, ideally joined with arbitrary ANDs and ORs. Right now I'm keeping it simple and just trying to implement the AND case. In pseudo-SQL (or pseudo-HQL if you would), it would be something like: select all items where attributes contains(ItemAttribute(name="foo1", value="bar1")) AND attributes contains(ItemAttribute(name="foo2", value="bar2")) The examples in the Hibernate docs didn't seem to address this particular use case, but it seems like a fairly common one. The disjunction case would also be useful, especially so I could specify a list of possible values, i.e. where attributes contains(ItemAttribute(name="foo", value="bar1")) OR attributes contains(ItemAttribute(name="foo", value="bar2")) -- etc. Here's an example that works OK for a single attribute: return getSession().createCriteria(Item.class) .createAlias("itemAttributes", "ia") .add(Restrictions.conjunction() .add(Restrictions.eq("ia.name", "foo")) .add(Restrictions.eq("ia.attributeValue", "bar"))) .list(); Learning how to do this would go a long ways towards expanding my understanding of Hibernate's potential. :)

    Read the article

  • PHP: MVC and Model

    - by Pirkka
    Hello. I've been wondering about one thing when creating models. If I make, for example, a Page model, is it both of these: something that can retrieve one row from the table, and something that can retrieve all the rows? Somehow I'm mixing up the objects and the database. I have thought of it like this: I would have to make a Page class that would represent one row in the table. It would also have all the basic CRUD methods. Then I would have to make a Pages class (some kind of collection) that would retrieve rows from the table and instantiate a Page object from each row. Is this kind of weird? If someone could explain the whole idea of the model to me... I'm confused again. Maybe I'm making the whole OOP thing more difficult than it is. And by the way, this forum is great. Hopefully people will just understand my problems. Heh. I was a long-time procedural-style programmer, and now in 3 months I have dived into OOP and MVC and PHP frameworks, and I just get more excited day by day as I explore this stuff!

    Read the article

  • Can I specify the files to commit in subversion in a file rather than on the command line?

    - by René Nyffenegger
    I have renamed (with svn move) a lot of files in a Subversion project. Now I am trying to commit these on Windows' cmd.exe. It seems that I have hit a limit (probably imposed by cmd.exe) in that the list of files is too long for the command line to swallow. Now, I thought and hoped that I could list the files to commit in a separate file that I could pass to the commit command (something like svn ci --files-to-commit=renamed-files.txt -m "Renamed a lot of files"). Yet either such an option does not exist or I am unable to find it. Unfortunately, I cannot do a svn ci . as I have made other changes in the project as well. Neither can I do a svn ci *pattern-of-renamed-files* since this would only check in the added files, not the deleted ones. Before I start checking in the files in smaller chunks (and thus increase the revision number unnecessarily without giving a hint as to the 'atomicity' of the operation), I thought I'd ask whether this is indeed impossible to do.
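
    For what it's worth, svn does have a --targets option that reads additional path arguments from a file, which sidesteps the cmd.exe command-line length limit. A rough sketch of driving it from Python (file names, paths and message are placeholders; for an svn move, both the old, deleted path and the new, added path should appear in the list):

        import subprocess

        # Placeholder list of scheduled paths; in practice this could be parsed
        # out of `svn status` output.
        renamed_files = ['old_name_1.txt', 'new_name_1.txt',
                         'old_name_2.txt', 'new_name_2.txt']

        # Write one path per line, then let svn read the list instead of putting
        # every path on the command line.
        with open('renamed-files.txt', 'w') as f:
            f.write('\n'.join(renamed_files))

        subprocess.call(['svn', 'commit', '--targets', 'renamed-files.txt',
                         '-m', 'Renamed a lot of files'])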

    Read the article

  • How do I assert that two arbitrary type objects are equivalent, without requiring them to be equal?

    - by Tomas Lycken
    To accomplish this (but failing to do so), I'm reflecting over the properties of an expected and an actual object and making sure their values are equal. This works as expected as long as the properties are single objects, i.e. not lists, arrays, IEnumerable... If a property is a list of some sort, the test fails (on the Assert.AreEqual(...) inside the for loop). public void WithCorrectModel<TModelType>(TModelType expected, string error = "") where TModelType : class { var actual = _result.ViewData.Model as TModelType; Assert.IsNotNull(actual, error); Assert.IsInstanceOfType(actual, typeof(TModelType), error); foreach (var prop in typeof(TModelType).GetProperties()) { Assert.AreEqual(prop.GetValue(expected, null), prop.GetValue(actual, null), error); } } If dealing with a list property, I would get the expected results if I instead used CollectionAssert.AreEquivalent(...), but that requires me to cast to ICollection, which in turn requires me to know the type listed, which I don't (want to). It also requires me to know which properties are list types, which I don't know how to do. So, how should I assert that two objects of an arbitrary type are equivalent? Note: I specifically don't want to require them to be equal, since one comes from my tested object and one is built in my test class to have something to compare with.

    Read the article

  • ClassLoader exceptions being memoized

    - by Jim
    Hello, I am writing a classloader for a long-running server instance. If a user has not yet uploaded a class definition, I throw a ClassNotFoundException; seems reasonable. The problem is this: There are three classes (C1, C2, and C3). C1 depends on C2, and C2 depends on C3. C1 and C2 are resolvable; C3 isn't (yet). C1 is loaded. C1 subsequently performs an action that requires C2, so C2 is loaded. C2 subsequently performs an action that requires C3, so the classloader attempts to load C3, but can't resolve it, and an exception is thrown. Now C3 is added to the classpath, and the process is restarted (starting from the originally-loaded C1). The issue is, C2 seems to remember that C3 couldn't be loaded, and doesn't bother asking the classloader to find the class... it just re-throws the memoized exception. Clearly I can't reload C1 or C2, because other classes may have linked to them (as C1 has already linked to C2). I tried throwing different types of errors, hoping the class might not memoize them. Unfortunately, no such luck. Is there a way to prevent the loaded class from binding to the exception? That is, I want the classloader to be allowed to keep trying if it didn't succeed the first time. Thanks!

    Read the article

  • Qt and variadic functions

    - by Noah Roberts
    OK, before lecturing me on the use of C-style variadic functions in C++...everything else has turned out to require nothing short of rewriting the Qt MOC. What I'd like to know is whether or not you can have a "slot" in a Qt object that takes an arbitrary amount/type of arguments. The thing is that I really want to be able to generate Qt objects that have slots of an arbitrary signature. Since the MOC is incompatible with standard preprocessing and with templates, it's not possible to do so with either direct approach. I just came up with another idea: struct funky_base : QObject { Q_OBJECT funky_base(QObject * o = 0); public slots: virtual void the_slot(...) = 0; }; If this is possible then, because you can make a template that is a subclass of a QObject derived object so long as you don't declare new Qt stuff in it, I should be able to implement a derived templated type that takes the ... stuff and turns it into the appropriate, expected types. If it is, how would I connect to it? Would this work? connect(x, SIGNAL(someSignal(int)), y, SLOT(the_slot(...))); If nobody's tried anything this insane and doesn't know off hand, yes I'll eventually try it myself...but I am hoping someone already has existing knowledge I can tap before possibly wasting my time on it.

    Read the article

  • C# wrapper for array of three pointers

    - by fergs
    I'm currently working on a C# wrapper to work with the Dallmeier Common API light. See previous posting: http://stackoverflow.com/questions/2430089/c-wrapper-and-callbacks I've got pretty much everything 'wrapped', but I'm stuck on wrapping a callback which contains an array of three pointers and an array of integers: dlm_setYUVDataCllback int(int SessionHandle, void (*callback) (long IPlayerID, unsigned char** yuvData, int* pitch, int width, int height, int64_t ts, char* extData)) Function Set callback, to receive current YUV image. Arguments SessionHandle: handle to current session. Return PlayerID (see callback). Callback - IPlayerId: id to the Player object - yuvData: array of three pointers to Y, U and V part of image The YUV format used is YUV420 planar (not packed). char *y = yuvData[0]; char *u = yuvData[1]; char *v = yuvData[2]; - pitch: array of integers for pitches for Y, U and V part of image - width: intrinsic width of image. - height - ts : timestamp of current frame - extData: additional data to frame How do I go about wrapping this in C#? Any help is much appreciated.

    Read the article

  • why won't Eclipse use the compiler I specify for my project?

    - by codeman73
    I'm using Eclipse 3.3. In my project, I've set the compiler compliance level to 5.0. In the build path for the project, I've added the Java 1.5 JDK in the Installed JREs section and am referencing that system library in my project build path. However, I'm getting compile errors for a class that implements PreparedStatement, for not implementing abstract methods that only exist in the Java 1.6 PreparedStatement - specifically, the methods setAsciiStream(int, InputStream, long) and setAsciiStream(int, InputStream). Strangely enough, it worked when we were compiling against Java 1.4, which the code was originally written for. We added the JREs for Java 1.4, referenced that system library in the project, set the project's compiler level to 1.4, and it works fine. But when I make the same changes to try to point to Java 5.0, it instead uses Java 6. Any ideas why? I wrote a similar question earlier, here: http://stackoverflow.com/questions/2540548/how-do-i-get-eclipse-to-use-a-different-compiler-version-for-java I know how you're supposed to choose a different compiler, but it seems Eclipse isn't taking it. It seems to be defaulting to Java 6, even though I have deleted all the Java 6 JDKs and JREs I could find. I've also updated the -vm option in my eclipse.ini to point to the Java 5 JDK.

    Read the article

  • MySQL INJECTION Solution...

    - by Val
    I have been bothered for so long by MySQL injection and was thinking of a way to eliminate the problem altogether. I have come up with something below and hope that many people will find it useful. The only drawback I can think of is partial search: 'Jo' returns "John" by using the LIKE %% statement. Here is a PHP solution: <?php function safeQ(){ $search= array('delete','select');//and every keyword... $replace= array(base64_encode('delete'),base64_encode('select')); foreach($_REQUEST as $k=>$v){ str_replace($search, $replace, $v); } } foo(); function html($str){ $search= array(base64_encode('delete'),base64_encode('select')); $replace= array('delete','select');//and every keyword... str_replace($search, $replace, $str); } //example 1 ... ... $result = mysql_fetch_array($query); echo html($result[0]['field_name']); //example 2 $select = 'SELECT * FROM safeQ($_GET['query']) '; //example 3 $insert = 'INSERT INTO .... value(safeQ($_GET['query']))'; ?> I know, I know that you could still inject using 1=1 or any other type of injection... but I think this could solve half of the problem, so that the right MySQL query is executed. So my question is: if anyone can find any drawbacks to this, please feel free to comment here. PLEASE GIVE AN ANSWER only if you think that this is a very useful solution and no major drawbacks are found, OR you think it is a bad idea altogether...

    Read the article

  • Question about making Asynchronous call in C# (WPF) to COM object

    - by Andrew
    Hi, sorry to ask such a basic question, but I seem to have a brain freeze on this one! I'm calling a COM (ATL) object from my WPF project. The COM method might take a long time to complete, so I thought I'd try calling it asynchronously. I have a few demo lines that show the problem. private void checkBox1_Checked(object sender, RoutedEventArgs e) { //DoSomeWork(); AsyncDoWork caller = new AsyncDoWork(DoSomeWork); IAsyncResult result = caller.BeginInvoke(null, null); } private delegate void AsyncDoWork(); private void DoSomeWork() { _Server.DoWork(); } The ATL method DoWork is very exciting. It is: STDMETHODIMP CSimpleObject::DoWork(void) { Sleep(5000); return S_OK; } I expected that running it this way would result in the checkbox being checked right away (instead of in 5 seconds) and that I would be able to move the WPF GUI around the screen. I can't - for 5 seconds. What am I doing wrong? I'm sure it's something pretty simple. Delegate signature wrong? Thanks.

    Read the article

  • P values in wilcox.test gone mad :(

    - by Error404
    I have code that isn't doing what it should do. I am testing the P value from a wilcox.test for a huge set of data. The code I am using is the following: library(MASS) data1 <- read.csv("file1path.csv",header=T,sep=",") data2 <- read.csv("file2path.csv",header=T,sep=",") data3 <- read.csv("file3path.csv",header=T,sep=",") data4 <- read.csv("file4path.csv",header=T,sep=",") data1$K <- with(data1,{"N"}) data2$K <- with(data2,{"E"}) data3$K <- with(data3,{"M"}) data4$K <- with(data4,{"U"}) new=rbind(data1,data2,data3,data4) i=3 for (o in 1:4800){ x1 <- data1[,i] x2 <- data2[,i] x3 <- data3[,i] x4 <- data4[,i] wt12 <- wilcox.test(x1,x2, na.omit=TRUE) wt13 <- wilcox.test(x1,x3, na.omit=TRUE) wt14 <- wilcox.test(x1,x4, na.omit=TRUE) if (wt12$p.value=="NaN"){ print("This is wrong") } else if (wt12$p.value < 0.05){ print(wt12$p.value) mypath=file.path("C:", "all1-less-05", (paste("graph-data1-data2",names(data1[i]), ".pdf", sep="-"))) pdf(file=mypath) mytitle = paste("graph",names(data1[i])) boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4")) dev.off() } if (wt13$p.value=="NaN"){ print("This is wrong") } else if (wt13$p.value < 0.05){ print(wt13$p.value) mypath=file.path("C:", "all2-less-05", (paste("graph-data1-data3",names(data1[i]), ".pdf", sep="-"))) pdf(file=mypath) mytitle = paste("graph",names(data1[i])) boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4")) dev.off() } if (wt14$p.value=="NaN"){ print("This is wrong") } else if (wt14$p.value < 0.05){ print(wt14$p.value) mypath=file.path("C:", "all3-less-05", (paste("graph-data1-data4",names(data1[i]), ".pdf", sep="-"))) pdf(file=mypath) mytitle = paste("graph",names(data1[i])) boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4")) dev.off() } i=i+1 } I am having 2 problems with this long script: 1- Without specifying a certain P value, the code gives me around 14,000 graphs; when specifying a P value less than 0.05, the number of graphs generated goes down to 9,000. THE FIRST PROBLEM IS: some P values are more than 0.05 and are still showing up! 2- I designed the program to give me a result of "This is wrong" when the value of P is "NaN", yet I am still getting results of "NaN". Here's a screenshot from the results. Do you know what mistake I made in the code to get these errors? Thanks in advance

    Read the article

  • Architecting ASP.net MVC App to use repositories and services

    - by zaladane
    Hello, I recently started reading about ASP.NET MVC, and after getting excited about the concept, I started to migrate all my WebForms projects to MVC, but I am having a hard time keeping my controllers skinny even after following all the good advice out there (or maybe I just don't get it...). The website I deal with has Articles, Videos, Quotes... and each of these entities has categories, comments and images that can be associated with it. I am using LINQ to SQL for database operations, and for each of these entities I have a repository, and for each repository I create a service to be used in the controller. So I have ArticleRepository, ArticleCategoryRepository, ArticleCommentRepository, and the corresponding services ArticleService, ArticleCategoryService... you see the picture. The problem I have is that I have one controller for article, category and comment, because I thought that having ArticleController handle all of that might make sense, but now I have to pass all of the services needed to the controller constructor. So I would like to know what it is that I am doing wrong. Are my services not designed properly? Should I create a bigger service that encapsulates the smaller services and use that in my controller, as in the sketch below? Or should I have an ArticleCategory controller and an ArticleComment controller? A page viewed by the user is made up of all of that: the article to be viewed, the comments associated with it, a listing of the categories to which it applies... How can I efficiently break down the controller to keep it "skinny" and solve my headache? Thank you! I hope my question is not too long to be read...
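
    One common way out of the constructor-bloat problem is a coarser-grained "article service" that composes the narrow repositories behind a single facade, so the controller only takes one dependency. A language-neutral sketch follows (written in Python purely for brevity - the question itself is about ASP.NET MVC and LINQ to SQL, and all the names and repository methods are made up):

        class ArticleService:
            """Facade over the article-related repositories; the controller sees one object."""
            def __init__(self, articles, categories, comments):
                self.articles = articles        # e.g. ArticleRepository
                self.categories = categories    # e.g. ArticleCategoryRepository
                self.comments = comments        # e.g. ArticleCommentRepository

            def article_page(self, article_id):
                # Everything a single "view article" page needs, gathered in one call.
                return {
                    'article': self.articles.get(article_id),
                    'categories': self.categories.for_article(article_id),
                    'comments': self.comments.for_article(article_id),
                }

        class ArticleController:
            def __init__(self, article_service):
                self.service = article_service  # one constructor argument instead of three

            def view(self, article_id):
                return self.service.article_page(article_id)  # hand this model to the view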

    Read the article

  • jQuery UI Dialog - when opened in IE7, the browser moves instantly to the bottom of the page

    - by Truegilly
    Hello, I have been working on a new .NET MVC site and have integrated some of the awesome jQuery UI components. I've been testing it in IE8, FF, Opera and Chrome and all looks well. Once I test in IE7, surprisingly it's the dialogs that are causing a problem. Basically, what's happening is that once a user clicks to open a dialog, the page will scroll immediately to the bottom. This is especially bad if the page is quite long. It only happens in IE7 (and probably 6, but I'm not even going there!). I have spent a few hours reading forums and it seems I'm not the only one. I have created a dirty hack which I'm not keen on, but it does work. onclick="SignIn(); <% if(ModelHelperClass.CheckForOldIEVersion() == true) Response.Write("window.scrollTo(0, 0);"); %> return false;"> Has anyone else had this issue and resolved it without resorting to dirty hacks? I'm using jquery-ui-1.8.custom.min.js and jquery-1.4.2.min.js. Any help is most appreciated. Truegilly

    Read the article

  • How can I get dtrace to run the traced command with non-root privileges?

    - by Gyom
    OS X lacks Linux's strace, but it has dtrace, which is supposed to be so much better. However, I miss the ability to do simple tracing on individual commands. For example, on Linux I can write strace -f gcc hello.c to capture all system calls, which gives me the list of all the filenames needed by the compiler to compile my program (the excellent memoize script is built upon this trick). I want to port memoize to the Mac, so I need some kind of strace. What I actually need is the list of files gcc reads and writes to, so what I need is more of a truss. Sure enough, I can say dtruss -f gcc hello.c and get somewhat the same functionality, but then the compiler is run with root privileges, which is obviously undesirable (apart from the massive security risk, one issue is that the a.out file is now owned by root :-). I then tried dtruss -f sudo -u myusername gcc hello.c, but this feels a bit wrong, and does not work anyway (I get no a.out file at all this time, not sure why). All that long story tries to motivate my original question: how do I get dtrace to run my command with normal user privileges, just like strace does on Linux? Edit: it seems that I'm not the only one wondering how to do this: question #1204256 is pretty much the same as mine (and has the same suboptimal sudo answer :-)

    Read the article

  • How to handle large dataset with JPA (or at least with Hibernate)?

    - by Roman
    I need to make my web app work with really huge datasets. At the moment I get either an OutOfMemoryException or output that takes 1-2 minutes to generate. Let's put it simply and suppose that we have 2 tables in the DB: Worker and WorkLog, with about 1,000 rows in the first one and 10,000,000 rows in the second one. The latter table has several fields, including 'workerId' and 'hoursWorked', among others. What we need is: the total hours worked by each user; the list of work periods for each user. The most straightforward approach (IMO) for each task in plain SQL is: 1) select Worker.name, sum(hoursWorked) from Worker, WorkLog where Worker.id = WorkLog.workerId group by Worker.name; //results of this query should be transformed to Multimap<Worker, Long> 2) select Worker.name, WorkLog.start, WorkLog.hoursWorked from Worker, WorkLog where Worker.id = WorkLog.workerId; //results of this query should be transformed to Multimap<Worker, Period> //if it were JDBC then it would be vital //to set resultSet.setFetchSize (someSmallNumber), ~100 So, I have two questions: how to implement each of my approaches with JPA (or at least with Hibernate); how would you handle this problem (with JPA or Hibernate, of course)?

    Read the article

  • JSF: how to temporarily disable validators to save a draft

    - by Swiety
    I have a pretty complex form with lots of inputs and validators. It takes the user a pretty long time (even over an hour) to complete it, so they would like to be able to save the draft data, even if it violates rules such as mandatory fields not being filled in. I believe this problem is common to many web applications, but I can't find any well-recognised pattern for how it should be implemented. Can you please advise how to achieve that? For now I can see the following options: use immediate=true on the "Save draft" button - this doesn't work, as the UI data would not be stored on the bean, so I wouldn't be able to access it. Technically I could find the data in the UI component tree, but traversing that doesn't seem to be a good idea. Remove all the field validation from the page and validate the data programmatically in the action listener defined for the form. Again, not a good idea: the form is really complex, there are plenty of fields, so validation implemented this way would be very messy. Implement my own validators, controlled by some request attribute that would be set for standard form submission (with full validation expected) and unset for "save as draft" submission (when validation should be skipped). Again, not a good solution: I would need to provide my own wrappers for all the validators I am using. As you can see, none of these is really reasonable. Is there really no simple solution to the problem?

    Read the article

  • Best practice PHP Form Action

    - by Rob
    Hi there. I've built a new script (from scratch, not a CMS) and I've done a lot of work on reducing memory usage and the time it takes for the page to be displayed (caching HTML etc.). There's one thing that I'm not sure about, though. Take a simple example of an article with a comments section. If the comment form posts to another page that then redirects back to the article page, I won't have the problem of people clicking refresh and resending the information. However, if I do it that way, I have to load my script twice, use twice as much memory, and it takes twice as long, whilst I'm still only displaying the page once. Here's an example from my load log. The first load of the article is from the cache, the second rebuilds the page after the comment is posted. Example 1 0 queries using 650856 bytes of memory in 0.018667 - domain.com/article/1/my_article.html 9 queries using 1325723 bytes of memory in 0.075825 - domain.com/article/1/my_article/newcomment.html 0 queries using 650856 bytes of memory in 0.029449 - domain.com/article/1/my_article.html Example 2 0 queries using 650856 bytes of memory in 0.023526 - domain.com/article/1/my_article.html 9 queries using 1659096 bytes of memory in 0.060032 - domain.com/article/1/my_article.html Obviously the time fluctuates, so you can't really compare that. But as you can see, with the first method I use more memory and it takes longer to load. BUT the first method avoids the refresh problem. Does anyone have any suggestions for the best approach, or for alternative ways to avoid the extra load (admittedly minimal, but I'd still like to avoid it) whilst also avoiding the refresh problem?
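
    The first option described above is the classic Post/Redirect/Get pattern. A minimal sketch of it follows (in Python/Flask purely for illustration - the script in question is PHP, and the routes and helper names here are invented). The extra cost only really bites if the GET view is rebuilt from scratch; serving it from the HTML cache already shown in the log (the 0-query loads) keeps the redirect cheap.

        from flask import Flask, redirect, request, url_for

        app = Flask(__name__)

        @app.route('/article/<int:article_id>', methods=['GET'])
        def show_article(article_id):
            # Build (or serve from the HTML cache) the article page on a plain GET.
            return 'article %d with its comments' % article_id

        @app.route('/article/<int:article_id>/comment', methods=['POST'])
        def add_comment(article_id):
            save_comment(article_id, request.form['comment'])  # hypothetical helper
            # Redirect back to the GET view: hitting refresh now re-issues a
            # harmless GET instead of re-posting the comment.
            return redirect(url_for('show_article', article_id=article_id))

        def save_comment(article_id, text):
            pass  # stands in for the real database insert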

    Read the article

  • I need a true mouseOver...

    - by invertedSpear
    OK, so mouseOver and RollOver, and their respective outs, work great as long as your mouse is actually over the item or one of its children. My problem is that I may have another UI component "between" my mouse and the item I want to process the mouse/rollover on (maybe a button that is on top of a canvas, but is not a child of the canvas). The mouse is still over the component; there's just something else that it's over at the same time. Any tips or help on how to deal with this? Let me know if I'm not being clear enough. Here is a simplified code example detailing my question; copy/paste it into your Flex/Flash Builder and you'll see what I mean: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" width="500" height="226" creationComplete="ccInit()"> <mx:Script> <![CDATA[ private function ccInit():void{ myCanv.addEventListener(MouseEvent.ROLL_OVER,handleRollOver); } private function handleRollOver(evt:MouseEvent):void{ myCanv.setStyle("backgroundAlpha",1); myCanv.addEventListener(MouseEvent.ROLL_OUT,handleRollOut); } private function handleRollOut(evt:MouseEvent):void{ myCanv.setStyle("backgroundAlpha",0); myCanv.removeEventListener(MouseEvent.ROLL_OUT,handleRollOut); } ]]> </mx:Script> <mx:Canvas id="myCanv" x="10" y="10" width="480" height="200" borderStyle="solid" borderColor="#000000" backgroundColor="#FFFFFF" backgroundAlpha="0"> </mx:Canvas> <mx:Button x="90" y="50" label="Button" width="327" height="100"/> </mx:Application>

    Read the article

  • Android map overlay transparency has unwanted gradient

    - by DavidP2190
    I'm making my first Android app, and for part of it I need some shaded areas over a MapView. I've got this working so far, but when I set the alpha of the fill colour, it goes strange. I'm not allowed to post images yet, but you can see what happens here: http://i.imgur.com/leXmc.png I'm not sure what causes it, but here is some code from where it gets drawn. public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) { Paint paint = new Paint(); paint.setStrokeWidth(2); paint.setColor(android.graphics.Color.GREEN); paint.setStyle(Paint.Style.FILL_AND_STROKE); paint.setAlpha(25); screenPoints = new Point[10]; Path path = new Path(); for (int x = 0; x<greenZone.length; x++){ screenPoints[x] = new Point(); mapView.getProjection().toPixels(greenZone[x], screenPoints[x]); } path.moveTo(screenPoints[0].x, screenPoints[0].y); for (int x = 1; x < screenPoints.length-1; x++){ paint.setAlpha(25); if (x<9){ path.lineTo(screenPoints[x].x, screenPoints[x].y); canvas.drawPath(path, paint); } else{ path.moveTo(screenPoints[screenPoints.length-1].x, screenPoints[screenPoints.length-1].y); path.lineTo(screenPoints[0].x, screenPoints[0].y); canvas.drawPath(path, paint); } } return true; } Thanks.

    Read the article

  • How to insert records in master/detail relationship

    - by croceldon
    I have two tables: OutputPackages (master) |PackageID| OutputItems (detail) |ItemID|PackageID| OutputItems has an index called 'idxPackage' set on the PackageID column. ItemID is set to auto increment. Here's the code I'm using to insert masters/details into these tables: //fill packages table for i := 1 to 10 do begin Package := TfPackage(dlgSummary.fcPackageForms.Forms[i]); if Package.PackageLoaded then begin with tblOutputPackages do begin Insert; FieldByName('PackageID').AsInteger := Package.ourNum; FieldByName('Description').AsString := Package.Title; FieldByName('Total').AsCurrency := Package.Total; Post; end; //fill items table for ii := 1 to 10 do begin Item := TfPackagedItemEdit(Package.fc.Forms[ii]); if Item.Activated then begin with tblOutputItems do begin Append; FieldByName('PackageID').AsInteger := Package.ourNum; FieldByName('Description').AsString := Item.Description; FieldByName('Comment').AsString := Item.Comment; FieldByName('Price').AsCurrency := Item.Price; Post; //this causes the primary key exception end; end; end; end; This works fine as long as I don't mess with the MasterSource/MasterFields properties in the IDE. But once I set them and run this code, I get an error that says I've got a duplicate primary key 'ItemID'. I'm not sure what's going on - this is my first foray into master/detail, so something may be set up wrong. I'm using ComponentAce's Absolute Database for this project. How can I get this to insert properly? Update: OK, I removed the primary key constraint in my db, and I see that for some reason the autoincrement feature of the OutputItems table isn't working like I expected. Here's how the OutputItems table looks after running the above code: ItemID|PackageID| 1 |1 | 1 |1 | 2 |2 | 2 |2 | I still don't see why all the ItemID values aren't unique.... Any ideas?

    Read the article

  • SVN supports historical merges so how is Mercurial better?

    - by radman
    Hi, I'm a long-time SVN user and have been hearing a lot of brouhaha with regard to Mercurial and decentralised version control systems in general. The main touted feature that I am aware of is that merging in Mercurial is much easier because it records information for each merge, so each successive merge is aware of the previous ones. Now, as stated in the red book, in the section to do with merging, SVN already supports this with mergeinfo. I have not actually used this feature (although I wanted to; our repo version wasn't recent enough), but is this SVN feature particularly different from what Mercurial offers? For anyone who is not aware, the suggested workflow for historical merging in SVN is this: branch from the development trunk to do your own thing. Regularly merge changes from trunk into your branch to stay up to date. Merge back when you're done, with the mergeinfo to smooth the process. Without historical merge data this is a nightmare, because the comparison is strictly on the differences in the files and does not take into account the steps taken along the way. So each change in the development trunk puts you further into possible conflict when you merge back. Now what I would like to know is: does merging with Mercurial provide a significant advantage compared with mergeinfo in SVN, or is this just a lot of hot air about nothing? Has anyone used the mergeinfo feature in SVN, and how good is it actually in practice?

    Read the article
