Search Results

Search found 14764 results on 591 pages for 'interview questions'.

Page 514/591 | < Previous Page | 510 511 512 513 514 515 516 517 518 519 520 521  | Next Page >

  • Initializing own plugins

    - by jgauffin
    I've written a few jQuery plugins for my client. I want to write a function that initializes each plugin that has been loaded. Example:

        <script src="/Scripts/jquery.plugin1.min.js" type="text/javascript"></script>
        <script src="/Scripts/jquery.plugin2.min.js" type="text/javascript"></script>
        <script src="/Scripts/jquery.initializer.min.js" type="text/javascript"></script>

    Here I've loaded plugin1 and plugin2. Plugin1 should be attached to all links with class name 'ajax' and plugin2 to all divs with class name 'test2':

        $(document).ready(function(){
            $('a.ajax').plugin1();
            $('div.test2').plugin2();
        });

    I know that I can use jQuery().pluginName to check if a plugin exists, but I want a leaner way. Can all loaded plugins register an initialize function in an array or something like that, which I can then iterate through and invoke in document.ready? Like:

        // in plugin1.js
        myCustomPlugins['plugin1'] = function() {
            $('a.ajax').plugin1();
        };

        // and in the initializer script:
        myCustomPlugins.each(function() { this(); });

    I guess it all boils down to three questions: How do I use a jQuery "global" array? How do I iterate through that array to invoke the registered methods? Is there a better approach?
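    One way to do this, sketched below on the assumption that every plugin file is loaded before the initializer script, is a plain array hanging off window that each plugin pushes its initializer into; the initializer then runs them all with $.each:

        // shared registry (declared defensively so any plugin file can create it first)
        window.myCustomPlugins = window.myCustomPlugins || [];

        // in plugin1.js: register an initializer instead of running it immediately
        window.myCustomPlugins.push(function () {
            $('a.ajax').plugin1();
        });

        // in the initializer script: run every registered initializer on DOM ready
        $(document).ready(function () {
            $.each(window.myCustomPlugins, function (i, init) {
                init();
            });
        });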

    Read the article

  • How can I determine the "correct" number of steps in a questionnaire where branching is used?

    - by Mike Kingscott
    I have a potential maths / formula / algorithm question which I would like help on. I've written a questionnaire application in ASP.Net that takes people through a series of pages. I've introduced conditional processing, or branching, so that some pages can be skipped depending on an answer, e.g. if you're over a certain age, you will skip the page that has Teen Music Choice and go straight to the Golden Oldies page.

    I wish to display how far along the questionnaire someone is (as a percentage). Let's say I have 10 pages to go through and my first answer takes me straight to page 9. Technically, I'm now 90% of the way through the questionnaire, but the user can think of themselves as being on page 2 of 3: the start page (with the branching question), page 9, then the end page (page 10). How can I show that I'm on 66% and not 90% when I'm on page 9 of 10?

    For further information, each page can have a number of questions, each of which can have one or more conditions on it that will send the user to another page. By default, the next page will be the next one in the collection, but that can be overridden (e.g. entire sets of pages can be skipped). Any thoughts?
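    One approach, sketched here without knowing the poster's actual page model, is to base the percentage on the user's path rather than on page indices: count the pages already answered, then walk the branch from the current page to estimate how many remain. "Page" and the nextPage delegate below are hypothetical stand-ins.

        // Sketch: progress along the user's actual path through the branched questionnaire.
        static double Progress(IList<Page> answered, Page current, Func<Page, Page> nextPage)
        {
            int remaining = 0;
            for (Page p = current; p != null; p = nextPage(p))   // walk the branch the user is on
                remaining++;                                      // counts the current page too

            int done = answered.Count + 1;                        // pages answered + the page shown now
            int total = answered.Count + remaining;
            return total == 0 ? 0 : 100.0 * done / total;
        }

    With the example above (one page answered, page 9 showing, one page left), this gives 2 of 3, roughly 66%, and it re-estimates automatically whenever a later answer changes the branch.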

    Read the article

  • Has anyone ever had OpenCV work with Python 2.7 on MacOS 10.6?

    - by ?????
    I've been trying on and off for the past 6 months to get OpenCV to work with Python on MacOS. Every time there's a new release, I try again and fail. I've tried both 64-bit and 32-bit, and both the Xcode gcc and the gcc installed via MacPorts. I just spent the past two days on it, hopeful that the latest OpenCV release, which appears to include Python support directly, would work. It doesn't. I've also tried and failed to use this: http://code.google.com/p/pyopencv/

    I've been using OpenCV with C++ or Microsoft C++/CLI for the past few years, but I'd love to use it with Python on a Mac because that is my primary development environment. I'd love to hear from anyone who's actually been able to get the OpenCV Python examples to run under Mac OS 10.6, either 32- or 64-bit.

    My last attempt was to follow the instructions on this page http://recursive-design.com/blog/2010/12/14/face-detection-with-osx-and-python/ with a clean, fresh install of 10.6 on a 64-bit capable Mac. My PYTHONPATH is set, and I can see the cv library in it. But an "import cv" from Python fails. Previously, the closest I've ever gotten (again, starting from a clean, fresh 10.6 install) was this:

        Python 2.7.1 (r271:86882M, Nov 30 2010, 10:35:34)
        [GCC 4.2.1 (Apple Inc. build 5664)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import cv
        Fatal Python error: Interpreter not initialized (version mismatch?)
        Abort trap
        thrilllap-2:~ swirsky$

    I've seen a lot of folks answering similar questions here, but have never seen a definitive answer for it.

    Read the article

  • Getting tree construction with ANTLR

    - by prosseek
    As asked and answered in http://stackoverflow.com/questions/2999755/removing-left-recursion-in-antlr , I could remove the left recursion from

        E -> E + T | T
        T -> T * F | F
        F -> INT | ( E )

    After left recursion removal, I get the following:

        E  -> T E'
        E' -> (empty) | + T E'
        T  -> F T'
        T' -> (empty) | * F T'

    Then, how do I do the tree construction with the modified grammar? With the input 1+2, I want to have the tree ^('+' ^(INT 1) ^(INT 2)), or similar.

        grammar T;
        options {
            output=AST;
            language=Python;
            ASTLabelType=CommonTree;
        }
        start : e -> e ;
        e  : t ep -> ??? ;
        ep :      | '+' t ep -> ??? ;
        t  : f tp -> ??? ;
        tp :      | '*' f tp -> ??? ;
        f  : INT | '(' e ')' -> e ;
        INT : '0'..'9'+ ;
        WS : (' '|'\n'|'\r')+ {$channel=HIDDEN;} ;
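    For what it's worth, a common way to sidestep this in ANTLR 3 is to skip the manual left-recursion removal altogether: an EBNF loop with the ^ root operator builds the operator tree directly, and ! drops the parentheses tokens. A minimal sketch, not tested against the grammar options above:

        e : t ('+'^ t)* ;
        t : f ('*'^ f)* ;
        f : INT | '('! e ')'! ;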

    Read the article

  • Rationale of C# iterators design (comparing to C++)

    - by macias
    I found a similar topic: http://stackoverflow.com/questions/56347/iterators-in-c-stl-vs-java-is-there-a-conceptual-difference It basically deals with the Java iterator (which is similar to C#'s) being unable to go backward.

    So here I would like to focus on limits: in C++ an iterator does not know its limit; you have to compare the given iterator with the limit yourself. In C# the iterator knows more; you can tell, without comparing it to any external reference, whether the iterator is valid or not.

    I prefer the C++ way, because once you have an iterator you can set any other iterator as its limit. In other words, if you would like to get only a few elements instead of the entire collection, you don't have to alter the iterator (in C++). To me it is more "pure" (clear). But of course MS knew C++ well while designing C#.

    So what are the advantages of the C# way? Which approach is more powerful (i.e. leads to more elegant functions based on iterators)? What am I missing?

    If you have thoughts on C# vs. C++ iterator design other than their limits (boundaries), please also answer. Note: (just in case) please keep the discussion strictly technical. No C++/C# flame war.
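    As a point of comparison (my sketch, not from the question): in C# the "limit" is usually expressed by composing the sequence itself rather than by passing a second iterator, e.g. with Enumerable.Take from System.Linq:

        using System;
        using System.Linq;

        int[] numbers = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };

        // C#: the bound travels with the sequence, not with a second iterator.
        foreach (int n in numbers.Take(5))
            Console.WriteLine(n);

        // C++ (for contrast): the caller supplies both ends of the range explicitly, e.g.
        //   std::for_each(v.begin(), v.begin() + 5, print);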

    Read the article

  • Beginner Ques::How to delete records permanently in case of linked tables?

    - by Serenity
    Let's say I have these 2 tables, QuesType and Ques:

        QuesType
        QuesTypeID | QuesType     | Active
        -----------------------------------
        101        | QuesType1    | True
        102        | QuesType2    | True
        103        | XXInActiveXX | False

        Ques
        QuesID | Ques  | Answer | QuesTypeID | Active
        ----------------------------------------------
        1      | Ques1 | Ans1   | 101        | True
        2      | Ques2 | Ans2   | 102        | True
        3      | Ques3 | Ans3   | 101        | True

    In the QuesType table, QuesTypeID is a primary key. In the Ques table, QuesID is a primary key and QuesTypeID is a foreign key that references QuesTypeID in the QuesType table.

    Now I am unable to delete records from the QuesType table; I can only make a QuesType inactive by setting Active = False. I am unable to delete QuesTypes permanently because of the foreign key relation it has with the Ques table. So I just set the column Active = false, and those QuesTypes then don't show on my grid when it is bound.

    What I want is to be able to delete any QuesType permanently. Now it can only be deleted if it is not being used anywhere in the Ques table, right? So to delete any QuesType permanently, this is what I thought I could do: in the grid that displays QuesTypes, I have a check box for Active and a button for delete. When a user makes some QuesType inactive, the OnCheckChanged() event will run, and that will have the code to delete all the questions in the Ques table that are using that QuesTypeID. Then, on the QuesType grid, that QuesType would show as deactivated, and only then can a user delete it permanently.

    Am I thinking correctly? Currently, in my DeleteQuesType stored procedure, what I am doing is setting Active = false and setting QuesType to some string like XXInactiveXX. Is there any other way?
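    One alternative, sketched below assuming SQL Server and the column names above: do the permanent delete in a single stored procedure that removes the dependent Ques rows first, so the foreign key is never violated. Declaring the foreign key with ON DELETE CASCADE achieves the same effect at the schema level.

        -- Hypothetical procedure: delete the child rows, then the parent row.
        CREATE PROCEDURE DeleteQuesTypePermanently
            @QuesTypeID INT
        AS
        BEGIN
            BEGIN TRANSACTION;
                DELETE FROM Ques     WHERE QuesTypeID = @QuesTypeID;  -- dependent questions
                DELETE FROM QuesType WHERE QuesTypeID = @QuesTypeID;  -- now safe to remove
            COMMIT TRANSACTION;
        END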

    Read the article

  • Efficient Multiplication of Varying-Length #s [Conceptual]

    - by Milan Patel
    Write the pseudocode of an algorithm that takes in two arbitrary-length numbers (provided as strings), and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm.

    I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this:

        a * b = (a/2) * 2b         if a is even
        a * b = ((a-1)/2) * 2b + b if a is odd

    My pseudocode is:

        rpa(x, y){
            if x is 1
                return y
            if x is even
                return rpa(x/2, 2y)
            if x is odd
                return rpa((x-1)/2, 2y) + y
        }

    I have 3 questions:

    1. Is this efficient for arbitrary-length numbers? I implemented it in C and tried varying-length numbers. The run-time was near-instant in all cases, so it's hard to tell empirically...

    2. Can I apply the Master Theorem to understand the complexity?
       a = # subproblems in recursion = 1 (at most 1 recursive call across all cases)
       n/b = size of each subproblem = n/1, so b = 1 (the problem doesn't change size...?)
       f(n) = work done outside the recursive calls = O(1), so d = 0 (the addition when a is odd)
       a = 1, b^d = 1, so a = b^d and the complexity is n^d * log(n) = log(n).
       This makes sense logically since we are halving the problem at each step, right?

    3. What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that?

    Many thanks in advance
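    A rough iterative sketch in Python (my own illustration, not the poster's C code), where the string inputs are parsed into integers first; Python's built-in arbitrary-precision ints hide the big-number arithmetic, which is presumably exactly what the "as strings" wording expects you to implement yourself:

        def russian_peasant(a_str, b_str):
            """Multiply two arbitrary-length decimal strings via halve-and-double."""
            a, b = int(a_str), int(b_str)   # Python ints are arbitrary precision
            product = 0
            while a >= 1:
                if a % 2 == 1:              # odd: fold the current b into the result
                    product += b
                a //= 2                     # halve a ...
                b *= 2                      # ... and double b
            return product

        print(russian_peasant("123456789123456789", "987654321"))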

    Read the article

  • Working with PHP and MySQL - need a good and secure design with OO design

    - by Andrew
    I am new to PHP, a first-time developer. I am working on my web application and it is nearly done; nevertheless, most of my SQL was done directly in code using direct mysql requests. This is the way I approached it: in classes_db.php I declared the db settings and created methods that I use to open and close DB connections. I declare those objects on my regular pages:

        class classes_db {
            public $dbserver   = 'server';
            public $dbusername = 'user';
            public $dbpassword = 'pass';
            public $dbname     = 'db';

            function openDb() {
                $dbhandle = mysql_connect($this->dbserver, $this->dbusername, $this->dbpassword);
                if (!$dbhandle) {
                    die('Could not connect: ' . mysql_error());
                }
                $selected = mysql_select_db($this->dbname, $dbhandle) or die("Could not select the database");
                return $dbhandle;
            }

            function closeDb($con) {
                mysql_close($con);
            }
        }

    On my regular page, I do this:

        <?php
        require 'classes_db.php';
        session_start();

        //create instance of the DB class
        $db = new classes_db();

        //get dbhandle
        $dbhandle = $db->openDb();

        //process query
        $result = mysql_query("update user set username = '" . $usernameFromForm . "' where iduser= " . $_SESSION['user']->iduser);

        //close the connection
        if (isset($dbhandle)) {
            $db->closeDb($dbhandle);
        }
        ?>

    My question is: how do I do this right and make it OO and secure? I know that I need to incorporate prepared queries; what is the best way to do that? Please provide some code.
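    A common direction, sketched below with PDO (which ships with PHP; the connection settings are placeholders), is to let the database class hand out a PDO instance and run every query as a prepared statement, so user input is never concatenated into the SQL string:

        <?php
        // classes_db.php -- minimal sketch using PDO and prepared statements
        class Db {
            private $pdo;

            public function __construct() {
                $this->pdo = new PDO('mysql:host=server;dbname=db;charset=utf8', 'user', 'pass');
                $this->pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            }

            public function updateUsername($username, $iduser) {
                // placeholders keep form input out of the SQL string entirely
                $stmt = $this->pdo->prepare('UPDATE user SET username = ? WHERE iduser = ?');
                $stmt->execute(array($username, $iduser));
            }
        }

        // on a regular page:
        $db = new Db();
        $db->updateUsername($usernameFromForm, $_SESSION['user']->iduser);
        ?>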

    Read the article

  • Error handling approach on PHP

    - by Industrial
    Hi everybody,

    We have a web server that we're about to launch a number of applications onto. They will all share database and memcached servers, but each application has its own MySQL database, and all memcached keys are prefixed per application.

    Possible scenario: If a memcached server in our cluster goes boom, we want someone (the system admin on duty) to be automatically contacted by email/iPhone push notification or in any other appropriate way. If we were to install 150 identical applications for our customers on our servers and a memcached server dies, all 150 applications will individually find this out and contact our system admin, who most certainly is going to think about getting a new job where he or she isn't woken up by 150 messages sent at 4:15 in the morning.

    Possible solution: One idea is to set up an external server for error handling that gets a $_POST or cURL request sent to it, and handles storage of the error message depending on the seriousness of the actual error. It would of course check, upon receiving the error call, whether the same memcached server has already been reported as offline, so there would be no need to spam the system admin with additional reminders...

    The questions: What's a good approach to handling errors like this? How do the big guys in the industry handle it?

    Thanks!
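    A rough sketch of the deduplication idea on the collector side (the table and field names are hypothetical): record each reported resource once, and only notify the admin the first time that resource is reported within some window.

        <?php
        // error_collector.php -- minimal sketch; assumes a PDO connection $pdo and a
        // hypothetical reported_outages(resource, reported_at) table.
        $resource = $_POST['resource'];          // e.g. "memcached-03"
        $message  = $_POST['message'];

        $stmt = $pdo->prepare(
            'SELECT COUNT(*) FROM reported_outages
             WHERE resource = ? AND reported_at > NOW() - INTERVAL 1 HOUR');
        $stmt->execute(array($resource));

        if ($stmt->fetchColumn() == 0) {
            // first report for this resource in the last hour: record it and alert once
            $pdo->prepare('INSERT INTO reported_outages (resource, reported_at) VALUES (?, NOW())')
                ->execute(array($resource));
            mail('sysadmin@example.com', "Outage: $resource", $message);
        }
        ?>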

    Read the article

  • Keeping track of dependency revisions

    - by Samaursa
    I have a project with several dependencies that are in various repositories. Each time I commit changes to my project, I make sure I write down the revision numbers of all the dependent repositories, so that in the event I ever have to come back to this revision (let's call it 5), I immediately know which revisions of the dependent repositories revision 5 is guaranteed to work with, update the dependencies to the specified revisions, and compile and run the project. So for example if I have:

        Dep1 @ Revision 10
        Dep2 @ Revision 20
        Dep3 @ Revision 10
        Proj @ Revision 35

    And let's say that when Proj was on revision 17, the Dep1 revision was 5, the Dep2 revision was 13 and the Dep3 revision was 3. So in my SVN logs, I recorded something like this:

        !! Works with Dep1 Rev 5, Dep2 Rev 13, Dep3 Rev 3

    To me this seems primitive and makes me believe that there is a better way to do it. Now in one of my other questions, the Ivy dependency manager has been recommended. I have not looked at it in detail yet (it seems complicated, and yet another thing I must learn). To me it seems like the log of SVN (and Mercurial etc.) could have been split into Log and Dependencies (if any), where the latter could be switched off if there were no dependencies (unless of course I am unaware of an easier/better solution). This would allow for a cleaner log that maybe even warned at each new commit to check the previously defined dependencies again and make sure they have not changed.

    So, I was wondering how everyone manages these situations, and if you have any tips, techniques, programs, or suggestions that you can offer. Thank you.
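    If the dependencies live in Subversion, one built-in option is svn:externals with pinned revisions: the pins are versioned along with the project, so checking out an old revision of the project also pulls the dependency revisions it was written against. A sketch in the Subversion 1.5+ externals format (the repository URLs and paths are made up):

        # svn propedit svn:externals .   -- each line: [-r REV] URL LOCALPATH
        -r 5  http://svn.example.com/dep1/trunk  libs/Dep1
        -r 13 http://svn.example.com/dep2/trunk  libs/Dep2
        -r 3  http://svn.example.com/dep3/trunk  libs/Dep3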

    Read the article

  • Why can’t PHP script write a file on server 2008 via command line or task scheduler?

    - by rg89
    I created a question on serverfault.com, and it was recommended that I ask here: http://serverfault.com/questions/140669/why-cant-php-script-write-a-file-on-server-2008-via-command-line-or-task-schedul

    I have a PHP script. It runs well when I use a browser. It writes an XML file in the same directory. The script takes ~60 seconds to run, and the resulting XML file is ~16 MB. I am running PHP 5.2.13 via FastCGI on Windows Server Web edition SP1 64 bit. The code pulls inventory from SQL Server and runs a loop to build an XML file for a third party.

    I created a task in Task Scheduler to run:

        c:\php5\php.exe "D:\inetpub\tools\build.php"

    The task scheduler shows a time lapse of about a minute, which is how long the script takes to run in a browser. No error is returned, but no file is created. Each time I make a change to the scheduled task properties, a user password box comes up and I enter the administrator account password. If I run this same path and argument at a command line, it does not error and does not create the file. When I right-click and run the command prompt as an administrator, the file is still not created. I get my echo statement "file published" that comes after the file creation, and no error is returned.

    I am doing a simple fopen / fwrite / fclose to save the contents of a PHP variable to a .xml file, and the file only gets created when the script is run through the browser. Here's what happens after the XML-building loop:

        $feedContent .= "</feed>";
        sqlsrv_close( $conn );
        echo "<p>feed built</p>";

        $feedFile = "feed.xml";
        $handler = fopen($feedFile, 'w');
        fwrite( $handler, $feedContent );
        fclose( $handler );
        echo "<p>file published</p>";

    Thanks
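    One likely culprit, and this is an assumption since the question doesn't show the working directory, is that "feed.xml" is a relative path: under the browser the current directory is the script's folder, but under Task Scheduler or a command prompt it usually isn't, so the file either lands somewhere unexpected or fails silently on permissions. Anchoring the path to the script's own directory sidesteps that:

        // Write next to the script regardless of the current working directory.
        $feedFile = dirname(__FILE__) . DIRECTORY_SEPARATOR . 'feed.xml';   // PHP 5.2-safe (no __DIR__)

        $handler = fopen($feedFile, 'w');
        if ($handler === false) {
            die("<p>could not open $feedFile for writing</p>");   // surfaces permission problems
        }
        fwrite($handler, $feedContent);
        fclose($handler);
        echo "<p>file published to $feedFile</p>";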

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder; yes, silly me. After a while, I managed to get all the .jar files where they belong, and they come to a whopping 12.5 MB, and there are more than 30 of them! Which concerns me, though it probably and hopefully shouldn't.

    How does Java operate in terms of these JAR files? They take up quite a bit of hard drive space, considering that they are compressed, compiled code, so they could fill a lot of RAM very quickly. My questions are:

    Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about stuff in the .jar that never gets used? Do .jars get cached somehow, for optimized application performance?

    When a single .jar is loaded, I understand that the thing sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance), unlike PHP where objects are created on the fly with each request. Is this assumption correct?

    When using Spring, I'm thinking: I had to include all those fiddly .jars, so wouldn't I be better off just using plain Java, with, say, at least an ORM solution like Hibernate? So far, Spring has taken extra time to configure, plus extra hard drive space, memory and CPU, so I'm concerned that the framework is going to cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There is still an ORM, a unit testing framework and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly easily. Where do I draw the line?
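    On the first question: classes are loaded lazily and per class, not per jar; the JVM reads individual .class entries out of a jar only when a class is first used. A small sketch makes this visible, since the static initializer fires only once the class is actually touched:

        // LazyLoadDemo.java -- minimal sketch illustrating lazy, per-class loading
        class Heavy {
            static { System.out.println("Heavy loaded"); }   // runs on first use of the class
            static void work() { System.out.println("working"); }
        }

        public class LazyLoadDemo {
            public static void main(String[] args) {
                System.out.println("before any use");  // "Heavy loaded" has not printed yet
                Heavy.work();                          // first use: the class is loaded now
            }
        }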

    Read the article

  • .NET Application with SQL Server CE Database

    - by blu
    I just started using SQL Server CE 3.5 in my WinForms application (C# in VS 2008 SP1). I've noticed a couple of interesting things I'd like some input on.

    1. Copying of the sdf file to bin
    My sdf file is located inside an Infrastructure project that houses my repository implementations. When the application is first debugged, the sdf is copied to bin\Debug. This is where all future reads/writes operate. At some point when this is deployed, the file will go into a data folder using ClickOnce, but during development where should I be putting this sdf? Is having it in the bin typical, or are there any other recommendations?

    2. Updating the sdf
    It appears that writing to the sdf file does not immediately update the database. I am using LINQ to SQL and am calling SubmitChanges, but on read the values are not returned. However, if I close the application and re-open it, the added value is there. Is there an additional flush step I need to take? What is causing this: file locking, buffering, something else?

    Update: 3. Unit tests
    I have an MSTest project, and the sdf file is not being copied to the correct output directory. I have the settings:

        Build Action: Content
        Copy to Output Directory: Copy Always

    The message is: System.Data.SqlServerCe.SqlCeException: The database file cannot be found. Check the path to the database.

    I appreciate any guidance on these questions, thanks. If there is a tutorial other than what is on MSDN that you know about, that would be great too. Working with CE is proving to be a difficult task and I welcome any help I can find.

    Read the article

  • GADTs and Scrap your Boilerplate

    - by finnsson
    I'm writing an XML (de)serializer using Text.XML.Light and Scrap your Boilerplate (at http://github.com/finnsson/Text.XML.Generic) and so far I have working code for "normal" ADTs, but I'm stuck at deserializing GADTs. I have the GADT DataBox where

        data DataBox where
          DataBox :: (Show d, Eq d, Data d) => d -> DataBox

    and I'm trying to get this to compile:

        instance Data DataBox where
          gfoldl k z (DataBox d)  = z DataBox `k` d
          gunfold k z c           = k (z DataBox)      -- not OK
          toConstr (DataBox d)    = toConstr d
          dataTypeOf (DataBox d)  = dataTypeOf d

    but I can't figure out how to implement gunfold for DataBox. The error message is:

        Text/XML/Generic.hs:274:23:
            Ambiguous type variable `b' in the constraints:
              `Eq b' arising from a use of `DataBox' at Text/XML/Generic.hs:274:23-29
              `Show b' arising from a use of `DataBox' at Text/XML/Generic.hs:274:23-29
              `Data b' arising from a use of `k' at Text/XML/Generic.hs:274:18-30
            Probable fix: add a type signature that fixes these type variable(s)

    It's complaining about not being able to figure out the data type of b. I'm also trying to implement dataCast1 and dataCast2, but I think I can live without them (i.e. an incorrect implementation).

    I guess my questions are: Is it possible to combine GADTs with Scrap your Boilerplate? If so, how do you implement gunfold for a GADT?
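    For what it's worth, the ambiguity appears to be inherent: gunfold would have to conjure up a concrete type for the existentially quantified d out of nothing. A common workaround when only gfoldl-based traversal (serialization) is actually needed is to leave gunfold as a loud failure; this is a sketch of that assumption, not a full solution for deserialization:

        instance Data DataBox where
          gfoldl k z (DataBox d)  = z DataBox `k` d
          -- No type information is available to pick a concrete d here, so rebuilding
          -- an existential via gunfold is not possible in plain SYB; fail loudly instead.
          gunfold _ _ _           = error "gunfold: cannot rebuild an existential DataBox"
          toConstr (DataBox d)    = toConstr d
          dataTypeOf (DataBox d)  = dataTypeOf d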

    Read the article

  • Vector does reallocation on every push_back

    - by Amrish
    IDE - Visual Studio 2008, Visual C++

    I have a custom class Class1 with a copy constructor. I also have a vector of Class1. Data is inserted using the following code:

        Class1* objClass1;
        vector<Class1> vClass1;

        for(int i = 0; i < 1000; i++)
        {
            objClass1 = new Class1();
            vClass1.push_back(*objClass1);
            delete objClass1;
        }

    Now on every insert, the vector gets reallocated and all the existing contents are copied to new locations. For example, if the vector has 5 elements and I insert the 6th one, the previous 5 elements along with the new one get copied to a new location (I figured this out by adding log statements to the copy constructor). On using reserve(), however, this does not happen, as expected.

    I have the following questions: Is it mandatory to always use the reserve statement? Does vector do a reallocation every time I do a push_back, or does it happen because I am debugging?
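    A quick way to see what the vector is actually doing (a standalone sketch, not the poster's Class1) is to log capacity(): reallocation only happens when size() would exceed capacity(), and reserve() moves that point up front. How fast the capacity grows is implementation-defined; debugging makes each copy slower but doesn't change the rule.

        #include <iostream>
        #include <vector>

        int main()
        {
            std::vector<int> v;
            v.reserve(1000);                       // one allocation up front; no reallocations below

            std::size_t lastCap = v.capacity();
            for (int i = 0; i < 1000; ++i)
            {
                v.push_back(i);                    // no new/delete needed; push_back copies the value
                if (v.capacity() != lastCap)       // only prints when a reallocation happened
                {
                    std::cout << "realloc at size " << v.size()
                              << ", new capacity " << v.capacity() << '\n';
                    lastCap = v.capacity();
                }
            }
        }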

    Read the article

  • Best way to run multiple queries per second on database, performance wise?

    - by Michael Joell
    I am currently using Java to insert and update data multiple times per second. Never having used databases with Java, I am not sure what is required and how to get the best performance. I currently have a method for each type of query I need to do (for example, update a row in a database). I also have a method to create the database connection. Below is my simplified code.

        public static void addOneForUserInChannel(String channel, String username) throws SQLException {
            Connection dbConnection = null;
            PreparedStatement ps = null;
            String updateSQL = "UPDATE " + channel + "_count SET messages = messages + 1 WHERE username = ?";
            try {
                dbConnection = getDBConnection();
                ps = dbConnection.prepareStatement(updateSQL);
                ps.setString(1, username);
                ps.executeUpdate();
            } catch(SQLException e) {
                System.out.println(e.getMessage());
            } finally {
                if(ps != null) {
                    ps.close();
                }
                if(dbConnection != null) {
                    dbConnection.close();
                }
            }
        }

    And my DB connection:

        private static Connection getDBConnection() {
            Connection dbConnection = null;
            try {
                Class.forName(DB_DRIVER);
            } catch (ClassNotFoundException e) {
                System.out.println(e.getMessage());
            }
            try {
                dbConnection = DriverManager.getConnection(DB_CONNECTION, DB_USER, DB_PASSWORD);
                return dbConnection;
            } catch (SQLException e) {
                System.out.println(e.getMessage());
            }
            return dbConnection;
        }

    This seems to be working fine for now, with about 1-2 queries per second, but I am worried that once I expand and it is running many more, I might have some issues. My questions: Is there a way to have a persistent database connection throughout the entire run time of the process? If so, should I do this? Are there any other optimizations that I should do to help with performance?

    Thanks
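    One common direction, sketched here under the assumption that a single thread issues the queries (with multiple threads, a connection pool behind a DataSource is the usual answer), is to open the connection once, reuse it, and only reconnect if it has gone stale:

        // Sketch: a lazily created, reused connection instead of one per query (Java 7+ for try-with-resources).
        private static Connection sharedConnection;

        private static synchronized Connection getDBConnection() throws SQLException {
            if (sharedConnection == null || sharedConnection.isClosed()) {
                sharedConnection = DriverManager.getConnection(DB_CONNECTION, DB_USER, DB_PASSWORD);
            }
            return sharedConnection;
        }

        public static void addOneForUserInChannel(String channel, String username) throws SQLException {
            String updateSQL = "UPDATE " + channel + "_count SET messages = messages + 1 WHERE username = ?";
            // try-with-resources closes the statement but leaves the shared connection open
            try (PreparedStatement ps = getDBConnection().prepareStatement(updateSQL)) {
                ps.setString(1, username);
                ps.executeUpdate();
            }
        }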

    Read the article

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I'm not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:

        log_table = schema.Table('log_table', metadata,
            schema.Column('id', types.Integer, primary_key=True),
            schema.Column('time', types.DateTime),
            schema.Column('ip', types.String(length=15))
        ....
        engine = create_engine(...)
        metadata.bind = engine
        connection = engine.connect()
        ....
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                fields = m.groupdict()
                pythonified = pythoninfy_log(fields)  # Turn them into ints, datetimes, etc
                if use_sql:
                    ins = log_table.insert(values=pythonified)
                    connection.execute(ins)
                    parsed += 1

    My two questions are:

    Is there a way to speed up the inserts within this basic framework? Maybe have a queue of inserts and some insertion threads, some sort of bulk insert, etc?

    When I used MySQL, for about ~1.2 million records the insert time was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the db engines seem about right, or does it mean I am doing something very wrong?
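    One speed-up that stays within this framework is to batch the parsed rows and let SQLAlchemy issue an executemany instead of one round trip per line, and to do the whole load inside a single transaction. A sketch, with an arbitrary batch size:

        batch = []
        BATCH_SIZE = 1000

        connection = engine.connect()
        trans = connection.begin()            # one transaction for the whole load
        for line in file_to_parse:
            m = line_regex.match(line)
            if not m:
                continue
            batch.append(pythoninfy_log(m.groupdict()))
            if len(batch) >= BATCH_SIZE:
                # a list of dicts makes execute() use executemany under the hood
                connection.execute(log_table.insert(), batch)
                batch = []
        if batch:
            connection.execute(log_table.insert(), batch)
        trans.commit()
        connection.close()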

    Read the article

  • Windows 2008 VPS hosting experiences

    - by Luke Bennett
    Whilst similar questions exist, I couldn't find any which quite match my request. I'm looking for hosting for some personal .NET projects which, for various reasons, I do not want to host on our servers at work. I need to be able to host multiple sites, and for that reason I'm thinking of a VPS with RDP access for the time being; I don't fancy shared hosting, as I feel it doesn't offer the flexibility and control I'm looking for.

    What experiences do people have of Windows 2008 VPS providers? I've come across a few possibilities, although it seems a lot of places are still on Windows 2003 with 2008 'coming soon'. Is a VPS the best way to go? Eventually (depending on how the projects take off) I intend to get a dedicated box, but at this stage it's not cost-effective.

    Also, what are people's experiences of running SQL Server Express on a VPS? What would you say the minimum requirements are for CPU/memory? I know it's not going to be anywhere near as performant as SQL Server 2005/8 running on a dedicated box, but I'm hoping it will be an acceptable starting point. Any other tips/advice also welcome!

    Edit: Forgot to mention, I'm ideally looking for UK hosting, although I'm open to alternatives.

    Read the article

  • JBoss admin-console fails to load - missing Log4J jar?

    - by Jack
    I downloaded JBoss 5.1 and unzipped it to ~/jboss/ such that JBoss is installed into ~/jboss/jboss-5.1.0.GA/. I run the default deployment by using the following command, found in jboss/jboss-5.1.0.GA/bin:

        ./run.sh -c default

    While JBoss starts (http://127.0.0.1:8080/), admin-console is not deployed. The log file in jboss/jboss-5.1.0.GA/server/default/log shows the following information:

        DEPLOYMENTS IN ERROR:
          Deployment "vfsfile:/Users/jackwootton/jboss/jboss-5.1.0.GA/server/default/deploy/admin-console.war/" is in error due to the following reason(s):
            org.jboss.deployers.spi.DeploymentException: URL file:/Users/jackwootton/jboss/jboss-5.1.0.GA/server/default/tmp/az6n6v-tjilfb-h32fokxn-1-h32fosuo-v/admin-console.war/ deployment failed
          Deployment "vfszip:/Users/jackwootton/jboss/jboss-5.1.0.GA/server/default/deploy/quartz-ra.rar/" is in error due to the following reason(s):
            org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.

    The Log4J jar file exists in jboss/jboss-5.1.0.GA/lib/jboss-logging-log4j.jar.

    I have three questions: Have I understood the problem correctly (i.e. that admin-console cannot find the required Log4J JAR file and therefore is not deployed)? What can I do to fix this problem? Why would an out-of-the-box deployment have this problem in the first place?

    Read the article

  • Getting all the classes in project and their packages from CodeRush extension

    - by drasto
    I've spent some time trying to find a way CodeRush could add a using when it finds an undeclared element that is in fact a class name with no using added. The solution suggested in this answer to my question (Refactor_resolve) does not work (bugged?). In the process I found out that writing plug-ins for CodeRush is easy, so I decided to code this functionality myself (and share it). I'd only implement a CodeProvider (like in this tutorial). The only things I need to do the job are answers to these questions:

    At startup, my plugin needs to get a list (set, map, whatever) of all classes and their packages. This means all the classes (interfaces...) and their packages in the project, and within all referenced libraries. I also need to receive updates on this (when the user adds a reference, creates a new class, and so on). Can I get this from some CodeRush classes, or maybe from a VS interface available from the CodeProvider class?

    How do I add the created CodeProvider to the pop-up that is shown when the user hovers over an Issue?

    Read the article

  • ASP.Net MVC: Change Routes dynamically

    - by Frank
    Hi, usually, when I look at an ASP.Net MVC application, the route table gets configured at startup and is never touched after that. I have a couple of questions on that, but they are closely related to each other:

    Is it possible to change the route table at runtime? How would/should I avoid threading issues?

    Is there maybe a better way to provide a dynamic URL? I know that IDs etc. can appear in the URL, but I can't see how this could be applicable to what I want to achieve.

    How can I arrange that, even though I have the default controller/action route defined, the default route doesn't work for a specific combination, e.g. the "Post" action on the "Comments" controller is not available through the default route?

    Background: Comment spammers usually grab the posting URL from the website and then don't bother to go through the website any more to do their automated spamming. If I regularly change my post URL to some random one, spammers would have to go back to the site and find the correct post URL to try spamming. If that URL changes constantly, I'd think that could make the spammers' work more tedious, which should usually mean that they give up on the affected URL.
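    On the first question: the route table is a static RouteCollection, and it exposes GetWriteLock() for exactly this kind of mutation. A sketch of swapping in a new, randomly named post route at runtime follows; the route name and token generation are placeholders, and whether rotating routes is actually worth the trouble is a separate question:

        // Sketch: replace the comment-post route with a new random URL at runtime.
        void RotateCommentRoute(string newToken)
        {
            RouteCollection routes = RouteTable.Routes;
            using (routes.GetWriteLock())                      // guards against concurrent readers
            {
                routes.Remove(routes["CommentPost"]);          // hypothetical route name, registered at startup
                routes.MapRoute(
                    "CommentPost",
                    "comments/post-" + newToken,               // e.g. "comments/post-8f3a2c"
                    new { controller = "Comments", action = "Post" });
            }
        }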

    Read the article

  • mysql select update

    - by Tillebeck
    Hi

    I have read quite a few select+update questions in here but cannot understand how to do it, so I will have to ask from the beginning. I would like to update a table based on data in another table. The setup is like this:

        TABLE a (int ; string)
        ID | WORD
        1  | banana
        2  | orange
        3  | apple

        TABLE b ("comma separated" string ; string)
        WORDS | TEXTAREA             | (desired WORDS after the update)
        0     | banana               | -> 0,1
        0     | orange apple apple   | -> BEST: 0,2,3  ELSE: 0,2,3,3
        0     | banana orange apple  | -> 0,1,2,3

    Now, for each word in TABLE a, I would like to append ",a.ID" to b.WORDS, like:

        SELECT id, word FROM a
        (for each) -> UPDATE b SET words = CONCAT(words, ',', a.id) WHERE b.textarea LIKE '%a.word%'

    Or even better: replace the word found in b.textarea with ",a.id", so it is b.textarea that ends up being a comma-separated string of ids... But I do not know if that is possible.
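    A single-statement sketch that gets close to the "BEST" column: use a correlated GROUP_CONCAT over table a, so every matching id is appended exactly once per row of b. Note that LIKE '%word%' also matches substrings of longer words, which may or may not be acceptable here.

        UPDATE b
        SET b.words = CONCAT(b.words, ',',
            (SELECT GROUP_CONCAT(a.id ORDER BY a.id SEPARATOR ',')
             FROM a
             WHERE b.textarea LIKE CONCAT('%', a.word, '%')))
        WHERE EXISTS (SELECT 1 FROM a WHERE b.textarea LIKE CONCAT('%', a.word, '%'));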

    Read the article

  • Iphone Core Data Internal Inconsistency

    - by kiyoshi
    This question has something to do with the question I posted here: http://stackoverflow.com/questions/1230858/iphone-core-data-crashing-on-save However, the error is different, so I am making a new question. Now I get this error when trying to insert new objects into my managedObjectContext:

        *** Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: '"MailMessage" is not a subclass of NSManagedObject.'

    But clearly it is:

        @interface MailMessage : NSManagedObject {
        ....

    And when I run this code:

        NSManagedObjectModel *managedObjectModel = [[self.managedObjectContext persistentStoreCoordinator] managedObjectModel];
        NSEntityDescription *entity = [[managedObjectModel entitiesByName] objectForKey:@"MailMessage"];
        NSManagedObject *newObject = [[NSManagedObject alloc] initWithEntity:entity insertIntoManagedObjectContext:self.managedObjectContext];

    It runs fine when I do not present an MFMailComposeViewController, but if I run this code in the

        - (void)mailComposeController:(MFMailComposeViewController*)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError*)error

    method, it throws the above error when creating the newObject variable. When I print the entity object, it produces the following in both cases:

        (<NSEntityDescription: 0x1202e0>) name MailMessage, managedObjectClassName MailMessage, renamingIdentifier MailMessage, isAbstract 0, superentity name (null), properties {

    so I don't think the managedObjectContext is completely invalid. I have no idea why it would say MailMessage is not a subclass of NSManagedObject at that point and not at the other. Any help would be appreciated, thanks in advance.

    Read the article

  • Spork doesn't reload code

    - by there-is-no-spoon
    I am using the following gems and ruby-1.9.3-p194:

        rails 3.2.3
        rspec-rails 2.9.0
        spork 1.0.0rc2
        guard-spork 0.6.1

    The full list of used gems is available in this Gemfile.lock or Gemfile. And I am using these configuration files: Guardfile, .rspec, spec_helper.rb, factories.rb.

    If I modify any model (or a custom validator in app/validators, etc.), reloading of the code doesn't work. I mean, when I run specs (hit Enter on the Guard console), Spork contains the "old code" and I get obsolete error messages. But when I manually restart Guard and Spork (CTRL-C, CTRL-D, guard), everything works fine. This gets tiresome after a few times.

    Questions: Can somebody look at my config files please and fix the error which blocks code reloading? Or maybe this is an issue with the newest Rails version?

    PS This problem repeats over some projects (and on some it does NOT), but I haven't figured out yet why this happens.

    PS2 Perhaps this problem has something to do with ActiveAdmin? When I change a file in app/admin, the code is reloaded.

    Read the article

  • synchronize threads - no UI

    - by UshaP
    I'm trying to write multithreading code and I'm facing some synchronization questions. I know there are lots of posts here but I couldn't find anything that fits.

    I have a System.Timers.Timer that elapses every 30 seconds; it goes to the db and checks if there are any new jobs. If it finds one, it executes the job on the current thread (the timer raises each Elapsed event on a new worker thread). While the job is running, I need to notify the main thread (where the timer is) about the progress.

    Notes: I don't have a UI, so I can't do BeginInvoke (or use a background worker) as I usually do in WinForms. I thought about implementing ISynchronizeInvoke on my main class, but that looks like overkill (maybe I'm wrong here). I have an event in my job class, the main class registers to it, and I invoke the event whenever I need to, but I'm worried it might cause blocking. Each job can take up to 20 minutes, and I can have up to 20 jobs running concurrently.

    My question is: What is the right way to notify my main thread about any progress in my job threads? Thanks for any help.
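    Without a UI message loop there is nothing to marshal "onto", so one common shape, sketched here assuming .NET 4's BlockingCollection is available, is to have jobs post progress messages to a thread-safe queue and let the main thread consume them. Raising an event straight from the job thread also works, as long as the handler is thread-safe and quick.

        using System;
        using System.Collections.Concurrent;

        class ProgressPump
        {
            // Progress messages flow from job threads to the main thread via this queue.
            private readonly BlockingCollection<string> progress = new BlockingCollection<string>();

            // Called from a job (on the timer's worker thread); cheap and non-blocking for the job.
            public void Report(string jobId, int percent)
            {
                progress.Add(jobId + ": " + percent + "%");
            }

            // Called on the main thread; blocks while idle and wakes once per message.
            public void Pump()
            {
                foreach (string message in progress.GetConsumingEnumerable())
                {
                    Console.WriteLine(message);   // log, send a notification, etc.
                }
            }
        }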

    Read the article

< Previous Page | 510 511 512 513 514 515 516 517 518 519 520 521  | Next Page >